“The government can and has simply exerted emergency powers in extreme situations. Developing AGI, properly understood, is definitely an extreme situation. If that were somehow ruled an executive overreach, congress can simply pass new laws.”
-> How likely do you think it is that there's clear consensus on AGI being an extreme situation, and at what point in the trajectory? I definitely agree that if there were consensus, the USG would take action. But I'm kind of worried things will be messy and unclear, and different groups will have different narratives, etc.
I think the question isn’t whether but when. AGI most obviously is a huge national security opportunity and risk. The closer we get to it, the more evidence there will be. And the more we talk about it, the more attention will be devoted to it by the national security apparatus.
The likely path to takeoff is relatively slow and continuous. People will get to talk to fully human-level entities before they’re smart enough to take over. Those people will recognize the potential of a new intelligent species in a visceral way that abstract arguments don’t provide.
That seems overconfident to me, but I hope you’re right!
To be clear:
- I agree that it’s obviously a huge natsec opportunity and risk.
- I agree the USG will be involved and that outcomes other than nationalization are more likely.
- I am not confident that there will be consensus across the US on things like ‘AGI could lead to an intelligence explosion’, ‘an intelligence explosion could lead to a single actor taking over the world’, ‘a single actor taking over the world would be bad’.
Maybe it’s overconfident. I’m not even sure if I hope I’m right. A Trump Executive Branch, or anything close, in charge of AGI seems even worse than Sam Altman or similar setting themselves up as god-emperor.
The central premises here are:
- Slow enough takeoff
  - Likely but not certain.
- Sufficient intelligence in government
  - Politicians aren't necessarily that smart or forward-looking.
  - National security professionals are.
- Visible takeoff (the public is made aware)
  - This does seem more questionable.
But OpenAI is currently full of leaks; keeping human-level AGI secret long enough for it to take over the world before the national security apparatus knows seems really hard.
Beyond all that, could there be some sort of comedy of errors, or massive collective and individual idiocy, that prevents the government from doing its job in a very obvious (in retrospect, at least) case?
Yeah, it’s possible. History, people, and organizations are complex and weird.