I agree that this might happen too fast to develop a Manhattan Project, but do you really see a way the government fails to even seize effective control of AGI once it's developed? It's pretty much their job to manage huge security concerns like the one AGI presents, even if they keep their hands off the immense economic potential. The scenarios in which the government just politely stands aside, or doesn't notice even when they see human-level systems with their own eyes, seem highly unlikely to me.
Seizing control of a project while it's taking off from roughly the human to the superhuman level isn't as good as taking control of the compute used to build it, but it's better than nothing, and it feels like the type of move governments often make. They don't even need to be public about it, just show up and say, "hey, let's work together so nobody needs to discuss laws around sharing security-breaking technology with our enemies."
It depends on how much time there is between the first impactful demonstration of long-horizon task capabilities (doing many jobs) and the commoditization of research-capable TAI, even with governments waking up during this interval and working to extend it. It might be that by default this interval is already at least a few years, and if the bulk of compute is seized, it extends even longer. This seems to require long-horizon task capabilities to be found only at the limits of scaling, with TAI significantly further out.
But we won't know until it's tried whether even a $3 billion training run already enables long-horizon task capabilities (with appropriate post-training, even if that arrives a bit later), and we don't know whether the first long-horizon-task-capable AI will immediately be capable of research, with no need for further scaling (even if scaling helps). And if it's not immediately obvious how to elicit these capabilities with post-training, there will be an overhang of sufficient compute and sufficiently strong base models in many places before the alarm is sounded. If enough of these things align, there won't be time for anyone to prevent prompt commoditization of research-capable TAI. And then there's ASI 1-2 years later, with the least possible time for anyone to steer any of this.