I definitely don’t find centralization inevitable. I have argued that the US government will very likely take control of AGI projects before they’re transformative. But I don’t think they’ll centralize them. Soft Nationalization: How the US Government Will Control AI Labs lists many legal ways the government could exert pressure on and control over AGI labs. I think that still severely underestimates the potential for government control without nationalization. The government can exert, and has exerted, emergency powers in extreme situations. Developing AGI, properly understood, is definitely an extreme situation. If that were somehow ruled an executive overreach, Congress could simply pass new laws. And prior to or failing all of that, the government can, probably will, and might already have some federal agent just show up and say “we just need to understand what’s happening and how important decisions are being made; nothing formal, we don’t want to have to take over and cause you trouble and a slowdown, so just keep us in the loop and there won’t be a problem”.
Taking control of matters of extreme national importance is the government’s job. It will do that as soon as it gets its collective head around what an immense deal AGI will be.
However, I don’t think they’ll centralize AGI, for two reasons. First, John Wentworth is very likely correct that it would slow down progress, probably a lot; bureaucracy does that. Second, the incoming administration believes this, whether or not it’s true.
A “Manhattan Project” for AGI would probably amount to soft government involvement, just throwing more money into the race dynamics. That’s what would get us to AGI fastest.
However, see AE Studio’s arguments and evidence that conservative lawmakers are actually pretty receptive to x-risk arguments. Still, I have a hard time imagining Trump either being that cautiously inclined, even if he does believe in the risks to some degree, or keeping his meaty little fingers off of what is starting to look like maybe the most important project of our time.
So unfortunately I think the answer is pretty clearly no, not during a Trump presidency.
“The government can exert, and has exerted, emergency powers in extreme situations. Developing AGI, properly understood, is definitely an extreme situation. If that were somehow ruled an executive overreach, Congress could simply pass new laws.”
-> How likely do you think it is that there’s clear consensus on AGI being an extreme situation, and at what point in the trajectory? I definitely agree that if there were consensus, the USG would take action. But I’m kind of worried that things will be messy and unclear, and that different groups will have different narratives, etc.
I think the question isn’t whether but when. AGI is most obviously a huge national security opportunity and risk. The closer we get to it, the more evidence there will be. And the more we talk about it, the more attention the national security apparatus will devote to it.
The likely path to takeoff is relatively slow and continuous. People will get to talk to fully human-level entities before they’re smart enough to take over. Those people will recognize the potential of a new intelligent species in a visceral way that abstract arguments don’t provide.
That seems overconfident to me, but I hope you’re right!
To be clear:
- I agree that it’s obviously a huge natsec opportunity and risk.
- I agree the USG will be involved, and that things other than nationalization are more likely.
- I am not confident that there will be consensus across the US on things like ‘AGI could lead to an intelligence explosion’, ‘an intelligence explosion could lead to a single actor taking over the world’, and ‘a single actor taking over the world would be bad’.
Maybe it’s overconfident. I’m not even sure if I hope I’m right. A Trump Executive Branch, or anything close, in charge of AGI seems even worse than Sam Altman or similar setting themselves up as god-emperor.
The central premises here are:
- Slow enough takeoff
  - Likely but not certain.
- Sufficient intelligence in government
  - Politicians aren’t necessarily that smart or forward-looking; national security professionals are.
- Visible takeoff: the public is made aware
  - This does seem more questionable.
But OpenAI is currently full of leaks; keeping human-level AGI secret long enough for it to take over the world before the national security apparatus knows seems really hard.
Outside of all that, could there be some sort of comedy of errors or massive collective and individual idiocy that prevents the government from doing its job in a very obvious (in retrospect at least) case?
Yeah, it’s possible. History, people, and organizations are complex and weird.