Could you expand on what you mean by ‘less automation’? I’m taking it to mean some combination of ‘bounding the space of controller actions more’, ‘automating fewer levels of optimisation’, ‘more of the work done by humans’, and maybe ‘only automating easier tasks’, but I can’t quite tell which of these you’re intending or how they fit together.
(Also, am I correctly reading an implicit assumption here that any attempt to do automated research would be classed as ‘automated AI safety’?)
Bounding the space of controller actions more is the key bit. The (vague) claim is that if you have an argument that an empirically tested automated safety scheme is safe, in the sense that you’ll know whether its output is correct, you may be able to find a more constrained setup where more of the structure is human-defined and easier to analyze, and the original argument may port over to the constrained setup.
I’m not claiming this is always possible, though, just that it’s worth searching for. Currently we don’t have well-developed arguments that we can recognize the correctness of automated safety work, so it’s hard to test the “less automation” hypothesis concretely.
I don’t think all automated research is automated safety: you can certainly do automated pure-capabilities research. But I may have misunderstood that part of the question.