Towards mutually assured cooperation

The development of AI has the potential to alter the global balance of power. Most significantly, sufficiently powerful AI could enable some nations to achieve total dominance, prompting others to consider nuclear responses to prevent that outcome. To manage the escalating risk of nuclear war as AI progresses, I propose internationally cooperative development of AGI as a safe equilibrium point towards which global efforts should be directed.

A minimum viable argument

  • AGI deployed for the benefit of anything other than all of humanity can be defined as weaponized AGI: a weapon capable of total military supremacy.

  • The exact point at which AI becomes a weapon of total supremacy can be recognized only after it has happened[1].

  • To avoid a total loss of power, countries will feel compelled to use military force, including nuclear force, before an adversary achieves total supremacy.

  • The threshold for nuclear response is highly uncertain, more so than in previous conflicts. Despite this uncertainty, nations lacking assurances of their safety have legitimate reasons to consider striking first.

  • This increased uncertainty about when to respond complicates nuclear deterrence: it encourages pre-emptive strikes, yet it also weakens the credibility of isolated nuclear threats, because there is no clear threshold to tie them to.

  • The remaining path is to demand de-escalation of weaponization through universally beneficial steps towards international cooperation, for which many options exist[2].

  • Restricting AI development to centralized, transparent efforts under international oversight maximizes the benefits and minimizes the risks. The safe equilibrium is the state in which maximal assurances are pursued that AGI will benefit all of humanity equally (a toy model of this equilibrium follows this list).
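The equilibrium claim can be illustrated with a minimal stag-hunt-style sketch. The payoff numbers below are illustrative assumptions, not estimates; $p$ denotes the assumed probability that a bloc racing for weaponized AGI attains supremacy before provoking a pre-emptive strike, and each cell lists (row, column) payoffs:

$$
\begin{array}{c|cc}
 & \text{Cooperate} & \text{Weaponize} \\ \hline
\text{Cooperate} & (3,\;3) & (-10,\;20p-10) \\
\text{Weaponize} & (20p-10,\;-10) & (-5,\;-5)
\end{array}
$$

Deviating from mutual cooperation pays off in expectation only when $20p-10 > 3$, i.e. $p > 0.65$. If the threat of pre-emption keeps $p$ low, mutual cooperation is a Nash equilibrium and Pareto-dominates the alternative; the mutual race remains a second, catastrophic equilibrium, which is why early coordination towards the cooperative one matters.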

Further considerations

  • Any leader in a weaponized AGI arms race eventually becomes the enemy of every nation lacking sufficient safety assurances. Initiating international cooperation early is the best way to provide those assurances.

  • The beneficiaries of AGI receive similar gains whether it is weaponized or humanity-aligned, but pursuing weaponized AGI carries significantly higher risks in almost all scenarios.

  • Alliances that exclude significant portions of the world contribute to the instability and uncertainty of nuclear response thresholds. Aiming to benefit all of humanity is the equilibrium point at which global agreement can be found.

  • When disagreements over AGI development arise, transparent arrangements for slowing or pausing development will be necessary to defuse nuclear risks.

Recommendations

Nations should pursue escalating, iteratively achievable demands for international cooperation in the development of AI. The growing risk of mutually assured destruction, and the absence of any net benefit to pursuing weaponized AGI, should be understood and addressed early.

AI safety advocacy should focus on the disadvantages of weaponized AGI and the advantages of international cooperation, emphasizing incremental, achievable steps towards a global agreement.

  1. ^
  2. ^