Technical Risks of (Lethal) Autonomous Weapons Systems

This whitepaper was written in response to the rolling text of the UN's Group of Governmental Experts at the Convention on Certain Conventional Weapons, found here.

Executive Summary

The autonomy and adaptability of (Lethal) Autonomous Weapons Systems, (L)AWS for short, promise unprecedented operational capabilities, but they also introduce profound risks that challenge the principles of control, accountability, and stability in international security. This report outlines the key technological risks associated with deploying (L)AWS, emphasizing their unpredictability, lack of transparency, and operational unreliability, all of which can lead to severe unintended consequences.

Key Takeaways

1. Proposed advantages of (L)AWS can only be achieved through objectification and classification, but a range of systematic risks limits the reliability and predictability of classification algorithms (see the illustrative sketch after this list).

2. These systematic risks include the black-box nature of AI decision-making, susceptibility to reward hacking and goal misgeneralization, and the potential for emergent behaviors that escape human control.

3. (L)AWS could act in ways that are not just unexpected but also uncontrollable, undermining mission objectives and potentially escalating conflicts.

4. Even rigorously tested systems may behave unpredictably and harmfully in real-world conditions, jeopardizing both strategic stability and humanitarian principles.
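To make the first takeaway concrete, below is a minimal, hypothetical sketch (not taken from the whitepaper or the GGE text) using scikit-learn. A classifier that is essentially perfect on its training distribution confidently misclassifies objects of the same kind once the input distribution shifts, for example because of different terrain, sensors, or occlusion. All data and numbers here are synthetic and illustrative only.

```python
# Hypothetical illustration: a classifier that looks reliable in testing can
# fail, with high confidence, once the input distribution shifts at deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated classes ("object of interest" vs "other").
X_train = np.vstack([rng.normal(-2.0, 0.5, (500, 2)),
                     rng.normal(+2.0, 0.5, (500, 2))])
y_train = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression().fit(X_train, y_train)
print("In-distribution accuracy:", clf.score(X_train, y_train))  # ~1.0

# Deployment data: class-0 examples drawn from a shifted distribution the
# model never saw (e.g. different environment or degraded sensor input).
X_shifted = rng.normal(+1.0, 0.5, (500, 2))
y_shifted = np.zeros(500, dtype=int)

probs_wrong_class = clf.predict_proba(X_shifted)[:, 1]
print("Accuracy under shift:", clf.score(X_shifted, y_shifted))            # near 0.0
print("Mean confidence in the wrong class:", probs_wrong_class.mean())     # near 1.0
```

The point of the sketch is not that any particular model is flawed, but that high confidence on held-out test data offers little assurance about behavior under conditions the system was never trained on, which is exactly the gap that matters when classification feeds targeting decisions.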

Summary of Risks
