Towards mutually assured cooperation
The development of AI has the potential to alter the global balance of power. Most significantly, sufficiently powerful AI could enable a nation to achieve total dominance over all others, or cause existential catastrophe through loss of control, prompting other nations to consider nuclear responses to prevent either outcome. To manage the escalating risk of nuclear war as AI progresses, I propose internationally cooperative development of AGI as a safe equilibrium point towards which global efforts should be directed.
A minimum viable argument
The exact point at which AI becomes a weapon that enables total domination can only be recognized after it has happened[1].
To avert a total loss of power, countries will feel compelled to use military or nuclear force before an adversary achieves total supremacy.
The threshold for nuclear response is highly uncertain, far more so than in previous conflicts. Despite this uncertainty, nations lacking credible assurances of their safety have legitimate reasons to consider striking first.
This increased uncertainty about when to respond to AI threats complicates nuclear deterrence: it encourages pre-emptive strikes, yet it also blunts the force of isolated nuclear threats, since there is no clear action threshold to anchor them.
The remaining path is to demand de-escalation of AI development through universally beneficial steps towards international cooperation, for which many options exist[2].
Restricting AI development to centralized, transparent efforts under international oversight maximizes the benefits and minimizes the risks. The safe equilibrium is the state in which the strongest available assurances that AGI will benefit all are actively pursued.
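To make the equilibrium claim concrete, below is a minimal game-theoretic sketch in Python. Every payoff number is an assumption invented purely for illustration; the values merely encode the argument above (a unilateral race provokes pre-emptive strikes before supremacy is reached, so neither the racer nor the laggard keeps their gains). Under these assumptions the game is a coordination game with two self-reinforcing outcomes, mutual racing and mutual cooperation, of which only cooperation is safe. That is the sense in which cooperative development is an equilibrium point that global efforts must be actively steered towards, rather than an outcome that emerges by default.

```python
# Toy sketch of the racing-vs-cooperating decision between two states.
# All payoffs are illustrative assumptions made up for this example; they
# encode the argument above, not any empirical estimate.
RACE, COOPERATE = 0, 1
NAMES = {RACE: "race", COOPERATE: "cooperate"}

# payoffs[(a, b)] = (payoff to state A, payoff to state B)
# Assumed structure:
#  - mutual cooperation: shared AGI benefits at low risk          -> (8, 8)
#  - unilateral race: the laggard strikes before supremacy,
#    so both sides bear nuclear costs                             -> (-50, -50)
#  - mutual race: sustained arms-race and loss-of-control risk    -> (-20, -20)
payoffs = {
    (RACE, RACE): (-20, -20),
    (RACE, COOPERATE): (-50, -50),
    (COOPERATE, RACE): (-50, -50),
    (COOPERATE, COOPERATE): (8, 8),
}

def is_nash(a: int, b: int) -> bool:
    """True if neither state can gain by unilaterally switching strategy."""
    pa, pb = payoffs[(a, b)]
    a_stays = all(payoffs[(alt, b)][0] <= pa for alt in (RACE, COOPERATE))
    b_stays = all(payoffs[(a, alt)][1] <= pb for alt in (RACE, COOPERATE))
    return a_stays and b_stays

for a in (RACE, COOPERATE):
    for b in (RACE, COOPERATE):
        if is_nash(a, b):
            print(f"equilibrium: ({NAMES[a]}, {NAMES[b]}) -> {payoffs[(a, b)]}")

# Prints both (race, race) and (cooperate, cooperate): each is self-reinforcing
# once entered, but only cooperation is safe, which is why coordinated movement
# towards it is needed rather than waiting for it to emerge.
```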
Further considerations
Any leader in the AGI arms race eventually becomes the enemy of those without sufficient safety assurances. Initiating international cooperation early is the best way to provide these assurances.
Those who would benefit from AGI gain roughly as much from a universally-aligned AGI as from a weaponized one, but pursuing weaponized AGI carries significantly higher risks in almost all scenarios.
Alliances that exclude significant portions of the world contribute to the instability and uncertainty of nuclear response thresholds.
When disagreements over AGI development arise, transparent arrangements for slowing or pausing development will be necessary to defuse nuclear risks.
Recommendations
Nations should pursue escalating but iteratively achievable demands for international cooperation in the development of AI. The mounting risk of mutually assured destruction, and the absence of net benefits from pursuing weaponized AI, should be understood and addressed early.
AI safety advocacy should focus on the disadvantages of weaponized AGI and the advantages of international cooperation, emphasizing stepwise progress towards a globally cooperative regime.
[1] There’s No Fire Alarm for Artificial General Intelligence
[2] For example, Effective Mitigations for Systemic Risks from General-Purpose AI