Something that confuses me about this type of model: for humans to be willing to delegate ~100% of AI research to pre-AGIs, that implies a very high degree of trust in their systems. Especially given that over time, a larger and larger share of the “things AI still can’t do” is “produce an output that a human trusts enough not to need to review”.
But if we’ve solved the “trust” bottleneck for pre-AGI systems, is that not equivalent to having basically enabled automated alignment research? In what ways is AGI alignment different from just-barely-pre-AGI alignment, which is itself a necessary precondition for ~100% of AI-research being automated?
Unless in your model AI research is entirely about “making the gradients flow more”, and not at all about things like finding new failure modes (e.g. around honesty, or explanations that are hard to verify), characterizing them, and then proposing principled solutions. That seems wrong to me; I expect those to be the primary bottlenecks to genuine automation.
This is the terrifying tradeoff: delaying for months after reaching near-human-level AI (if there is safety research that requires studying AI at or beyond that level) is plausibly enough time for a capabilities explosion (yielding arbitrary economic and military advantage, or AI takeover) by a more reckless actor who is willing to accept a larger level of risk, or who makes an erroneous/biased risk estimate. And AI models selected to yield results while under control, but which catastrophically take over once they are collectively capable of it, would make it look like automating everything was largely going fine (absent vigorous probes) right up until it wasn’t, so mistrust could seem like paranoia.
I agree that the final tasks that humans do may look like “check that you understand and trust the work the AIs have done”, and that a lack of trust is a plausible bottleneck to full automation of AI research.
I don’t think the only way for humans at AI labs to get that trust is to automate alignment research, though that is one way. Human-conducted alignment research might lead them to trust AIs, or they might have a large amount of trust in the AIs’ work without believing the AIs are aligned. E.g. they separate the workflow into lots of narrow tasks that can be done by a variety of non-agentic AIs that they don’t think pose a risk; or they set up a system of checks and balances (where different AIs check each other’s work and look for signs of deception) that they trust despite thinking certain AIs may be unaligned; or they do such extensive adversarial training that they’re confident the AIs would never actually try anything deceptive in practice (perhaps because the AIs are paranoid that any seeming opportunity to trick humans is just a human-designed test of their alignment). TBC, I think “being confident that the AIs are aligned” is better and more likely than these alternative routes to trusting the work.
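A very rough sketch of what that “checks and balances” setup could look like in code (everything here, including the worker/reviewer interfaces and thresholds, is my own illustrative assumption rather than anything the comment specifies):

```python
# Hypothetical sketch of the "checks and balances" route: a worker AI produces
# output on a narrow task, and several independently trained reviewer AIs flag
# possible deception before the output is accepted. Illustrative only.

from typing import Callable, Optional

def cross_checked(task: str,
                  worker: Callable[[str], str],
                  reviewers: list[Callable[[str, str], bool]]) -> Optional[str]:
    """Accept the worker's output only if no reviewer flags it as suspicious."""
    output = worker(task)
    flagged = [i for i, review in enumerate(reviewers) if review(task, output)]
    if flagged:
        # Don't trust the output; escalate to human review instead.
        print(f"Output flagged by reviewers {flagged}; escalating to humans.")
        return None
    return output

# Toy stand-ins for the worker and the reviewers:
worker = lambda task: f"proposed patch for: {task}"
reviewers = [
    lambda task, out: "backdoor" in out,   # flags overtly suspicious content
    lambda task, out: len(out) > 10_000,   # flags output too long for humans to verify
]
print(cross_checked("optimize the training loop", worker, reviewers))
```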
Also, when I’m forecasting AI capabilities, I’m forecasting AI that could readily automate 100% of AI R&D, not AI that actually does automate it. If trust were the only factor preventing full automation, that would still count as AI that could readily automate 100%.
Also, they might let the AIs proceed with the research anyway even though they don’t trust that they are aligned, or they might erroneously trust that they are aligned due to deception. If this sounds irresponsible to you, well, welcome to Earth.
Why would AGI research be anything other than recursion?
We make a large benchmark of automatically gradeable cognitive tasks. Things like “solve all these multiple-choice tests”, drawn from an enormous set of every test given in every program at each institution willing to share them.
“Control this simulated robot and diagnose and repair these simulated machines”
“Control this simulated robot and beat Minecraft”
“Control this simulated robot and wash all the dishes”
And so on and so forth.
Anyways, some tasks would be “complete all the auto-gradeable coursework for this program of study in AI” and “using this table of information about prior attempts, design a better AGI to pass this test”.
We want the machine to have generality (using information it learned from one task on others), to perform well on all the tasks, and to make efficient use of compute.
So the scoring heuristic would reflect that.
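As an entirely illustrative sketch of such a scoring heuristic (the weights, names, and functional form below are my assumptions, not part of the proposal): combine the mean auto-graded task score with a transfer bonus for generality and a term that rewards staying within a compute budget.

```python
# Illustrative sketch of a scoring heuristic that rewards per-task performance,
# generality (transfer between tasks), and efficient use of compute.
# All weights and names are assumptions, not part of the original proposal.

from dataclasses import dataclass

@dataclass
class TaskResult:
    task_id: str
    score: float          # normalized 0..1 score from the auto-grader
    compute_used: float   # e.g. GPU-hours spent on this task

def aggregate_score(results: list[TaskResult],
                    transfer_gain: float,
                    compute_budget: float,
                    transfer_weight: float = 0.2,
                    efficiency_weight: float = 0.2) -> float:
    """Combine mean task score, a generality bonus, and a compute-efficiency term.

    transfer_gain: average improvement on held-out tasks attributable to
    experience gained on other tasks (a crude proxy for generality).
    """
    if not results:
        return 0.0
    mean_score = sum(r.score for r in results) / len(results)
    total_compute = sum(r.compute_used for r in results)
    # Efficiency approaches 1 when well under budget and 0 when far over it.
    efficiency = compute_budget / (compute_budget + total_compute)
    return ((1.0 - transfer_weight - efficiency_weight) * mean_score
            + transfer_weight * transfer_gain
            + efficiency_weight * efficiency)

# Example: two simulated-robot tasks, modest transfer, within compute budget.
results = [TaskResult("minecraft", 0.8, 40.0), TaskResult("dishes", 0.6, 20.0)]
print(aggregate_score(results, transfer_gain=0.3, compute_budget=100.0))
```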
The “efficient use of compute” would select for models that don’t have time to deceive, so it might in fact be safe.