Interesting analysis! I think it’ll be useful for more folks to think about the nuclear/geopolitical implications of AGI development, especially in worlds where governments are paying more attention & one or more nuclear powers experience a “wakeup” or “sudden increase in situational awareness.”
Some specific thoughts:
Of these two risks, it is likely simpler to work to reduce the risk of failure to navigate.
Can you say more about why you believe this? At first glance, it seems to me like “fundamental instability” is much more tied to how AI development goes, so I would’ve expected it to be more tractable [among LW users]. Whereas “failure to navigate” seems further outside our spheres of influence: it seems to me like there would be a lot of intelligence agency analysts, defense people, and national security advisors contributing to discussions about whether or not to go to war. It seems plausible that a well-written analysis from folks in the AI safety community could be useful, but my impression is that it would be pretty hard to make a splash here since (a) things would be so fast-moving, (b) a lot of the valuable information about the geopolitical scene will be held by people working in government and people with security clearances, making it harder for outside people to reason about things, and (c) even conditional on valuable analysis, the stakeholders who will be deferred to are (mostly) going to be natsec/defense stakeholders.
3) In the aftermath of a nuclear war, surviving powers would be more fearful and hostile.
4) There would be greater incentives to rush for powerful AI, and less effort expended on going carefully or considering pausing.
There are lots of common-sense reasons why nuclear war is bad. That said, I’d be curious to learn more about how confident you are in these statements. In a post-catastrophe world, it seems quite plausible to me that the rebounding civilizations would fear existential catastrophes and dangerous technologies and try hard to avoid technology-induced catastrophes. I also just think such scenarios are very hard to reason about, such that there’s a lot of uncertainty around whether AI progress would be faster (because civilizations are fearful of each other and hostile) or slower (because civilizations are fearful of technology-induced catastrophes and generally have more of a safety/security mindset).
Can you say more about why you believe this? At first glance, it seems to me like “fundamental instability” is much more tied to how AI development goes, so I would’ve expected it to be more tractable [among LW users].
Maybe “simpler” was the wrong choice of word. I didn’t really mean “more tractable”. I just meant “it’s kind of obvious what needs to happen (even if it’s very hard to get it to happen)”. Whereas with fundamental instability, it’s unclear whether the instability is actually very overdetermined, or what exactly could nudge things into a part of scenario space with stable possibilities.
In a post-catastrophe world, it seems quite plausible to me that the rebounding civilizations would fear existential catastrophes and dangerous technologies and try hard to avoid technology-induced catastrophes.
I agree that it’s hard to reason about this stuff, so I’m not super confident in anything. However, my inside view is that this story seems plausible if the catastrophe seems like it was basically an accident, but less plausible for nuclear war. Somewhat more plausible is that rebounding civilizations would create a meaningful world government to avoid repeating history.