What bothers me about this table is that nuclear brinkmanship—“stop doing that or we will kill ourselves and you”—doesn’t seem very likely to ever actually be carried out.
I think that proves too much. By this logic, nuclear war can never happen, because “stop invading us or we will kill ourselves and you” poses a similar decision problem, no? “Die immediately” vs. “maybe we can come back from occupation via guerrilla warfare”. In which case, pro-AI-ban nations can just directly invade the defectors and dig out their underground data centers via conventional means?
Or even just precision-nuke the data centers alone, since they know the attacked nation won’t retaliate with a strike on the attacker’s population centers for fear of an annihilatory counterstrike? Again, a choice of “die immediately” vs. “maybe we can hold our own geopolitically without AGI after all”.
Edit:
- Outcome given “AI safe”: life (maybe under occupation)
- Outcome given “AI rogue”: life or delayed death (the AI chooses)
- Outcome given “AI weak”: life
Also, as I’d outlined, I expect that a government with this stance on AGI isn’t going to try to ban it domestically to begin with, especially if AI has so much acknowledged geopolitical importance that some other nation is willing to nuclear-war-proof its data centers. The scenario where a nation bans domestic AI and tries to bully others into doing the same is one in which that nation is pretty certain the “AI safe” and “AI weak” outcomes aren’t gonna happen.