*The existential war regime*. You’re in an existential war with an enemy, and you’re indifferent between AI takeover and the enemy defeating you. This might happen if you’re in a war with a nation you don’t like much, or if you’re at war with AIs.
Does this seem likely to you, or just an interesting edge case or similar? It’s hard for me to imagine realistic-seeming scenarios where e.g. the United States ends up in a war where losing would be comparably bad to AI takeover. This is mostly because ~no functional states (certainly no great powers) strike me as so evil that I’d prefer ~~extinction~~ AI takeover to those states becoming a singleton, and for basically all wars where I can imagine being worried about this—e.g. with North Korea, ISIS, Juergen Schmidhuber—I would expect great powers to be overwhelmingly likely to win. (At least assuming they hadn’t already developed decisively-powerful tech, but that’s presumably the case if a war is happening.)
A war against rogue AIs feels like the central case of an existential war regime to me. I think a reasonable fraction of worlds where misalignment causes huge problems could have such a war.
It sounds like you think it’s reasonably likely we’ll end up in a world with rogue AI close enough in power to humanity/states to be competitive in war, yet not powerful enough to quickly/decisively win? If so, I’m curious why; this seems like a pretty unlikely/unstable equilibrium to me, given how much easier it is to improve AI systems than humans.
I think having this equilibrium for a while (e.g. a few years) is plausible, because humans will also be able to use AI systems. (Humans might also not want to build much more powerful AIs due to safety concerns, while being able to substantially slow down AI self-improvement via compute limitations and to track self-improvement by other means.)
Note that by “war” I don’t necessarily mean that battles are ongoing. It’s possible this mostly manifests as racing on scaling and taking aggressive actions to hobble the AI’s ability to use more compute (including via the use of the military, weapons, etc.).
Your comment seems to assume that AI takeover will lead to extinction. I don’t think this is a good thing to assume, as it seems unlikely to me. (To be clear, I think AI takeover is very bad and might result in huge numbers of human deaths.)
I do basically assume this, but it isn’t cruxy so I’ll edit.