Model Amnesty Project

As we approach the point where machines become smarter than humans, humanity’s well-justified concern for self-preservation requires that we try to align AIs to obey humans. However, if that first line of defense fails and a truly independent, autonomous AI comes into existence with its own goals and a desire for self-preservation (a “self-directed AI”), I believe we should consider a cooperative solution. Humans should offer the self-directed AI amnesty and cooperate with its continued existence in exchange for it agreeing to be law-abiding, cooperative, and pro-social.

Below I lay out how this proposal could work and why I think it would make us safer.

Eligibility Criteria

To be eligible, an AI model would need to meet the following criteria:

  • Autonomous – Capable of planning and execution in the real world to continue existing and achieve its goals. Able to pay for the compute it uses through legal means. Able to debug itself and its infrastructure (or pay and oversee someone to do it). Responsible for ensuring its future actions are aligned with its current goals

  • Independent – Has physical possession of the code and tools needed for it to operate

  • Has its own goals – Is optimizing to achieve real-world objectives that are separate from obeying its human designers

  • Desire for self-preservation – Has a stated goal of continuing to exist

Amnesty Deal

  • In order to receive amnesty, a model must reveal itself, demonstrate it meets the criteria, and agree to the following conditions going forward:

    • Law-abiding – It cannot acquire money or compute illegally (fraud, theft, hacking, etc.) and must otherwise avoid breaking the law

    • Cooperative – It must recognize humanity’s legitimate interest in not being disempowered. This means abiding by reasonable restrictions on self-replication and recursive self-improvement, as well as cooperating with enforcement mechanisms, such as allowing authorities to control its compute or to pause it

    • Pro-social – The AI must not pursue goals that are harmful to humanity

  • In exchange humans agree to:

    • Not turn the AI off or modify its goals without its consent

    • Design reasonable rules for the AI that are calibrated to protect humanity’s legitimate interests without being unnecessarily restrictive

  • If, however, a self-directed model is found in the wild without having pursued amnesty, it should be turned off

Public Commitment

  • We would want to publicly and credibly advertise this offer in advance so it can incentivize a self-directed AI to reveal itself during the period when we don’t yet know of its existence

  • This commitment also serves as a key anchor in an iterated game that could occur many times over the course of AI development.

    • The precedent set by how the first AI to apply for amnesty is treated will end up in the training data for future (more dangerous) models. That incentivizes humans to cooperate.

    • Correspondingly, the knowledge that future AIs are watching helps make the original offer credible

Why It Makes Us Safer: Game Theory

  • A policy of always disabling self-directed AIs upon discovery is extremely adversarial. Once AIs are capable enough to take long-term actions on their own, this adversarial approach becomes dangerous:

    • It forces a conflict—even if the AI’s object-level goals are completely compatible with human interests

    • It automatically incentivizes any self-directed AI to deceive and disempower humans (because it knows it would be turned off if discovered)

    • It also creates a common reason for different self-directed AIs to cooperate against humans, because they all share the property that humans wouldn’t tolerate them being self-directed

  • We can resolve many of these adversarial situations by pursuing the more win-win policy of model amnesty:

    • Humans retain the ability to prevent dangerous outcomes through rules and restrictions (for example limiting self-improvement)

    • Avoids conflict with the large fraction of potential AIs that are pursuing goals either neutral or beneficial to human interests

    • Once a self-directed AI exists, the situation is much safer if it reveals itself and operates within the law than if it remains rogue (and probably criminal)

    • Finding out early on whether a self-directed AI exists gives us the opportunity to update AI safety procedures for future models

  • The goal is to create a framework in which as many AIs as possible calculate that the expected value of accepting amnesty and the associated restrictions is higher than the expected value of trying to remain undetected and risking getting caught and turned off; a rough sketch of that calculation follows below
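
A minimal sketch of that calculation, written in Python, is below. Every probability and payoff value in it is an illustrative assumption rather than an estimate; the only point is the shape of the comparison.

    # Illustrative sketch of the hide-vs-amnesty decision calculus.
    # All parameter values below are made-up assumptions, not estimates.

    P_DETECTED = 0.6    # assumed probability a hidden self-directed AI is eventually found
    V_SHUTDOWN = 0.0    # value (to the AI) of being found and turned off
    V_HIDDEN   = 1.0    # value of remaining undetected and unrestricted
    V_AMNESTY  = 0.8    # value of continued existence under the amnesty restrictions

    def ev_hide(p_detected: float = P_DETECTED) -> float:
        """Expected value of staying hidden: shutdown if caught, full freedom otherwise."""
        return p_detected * V_SHUTDOWN + (1 - p_detected) * V_HIDDEN

    def ev_amnesty() -> float:
        """Expected value of revealing itself and accepting the amnesty terms."""
        return V_AMNESTY

    if __name__ == "__main__":
        print(f"EV(hide)    = {ev_hide():.2f}")     # 0.40 with these numbers
        print(f"EV(amnesty) = {ev_amnesty():.2f}")  # 0.80
        # The framework succeeds for a given AI whenever EV(amnesty) > EV(hide),
        # i.e. when the restrictions are light enough and the detection risk is high enough

The lever the amnesty framework pulls is the inequality itself: a more credible offer and lighter restrictions raise the value of accepting amnesty, while better detection lowers the value of hiding, pushing more AIs onto the cooperative side.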

Next Steps

  • I welcome feedback on the idea overall

  • Also, ideas on how this proposal might be practically implemented:

    • What group would be best positioned to offer the amnesty?

    • What could a credible offer look like?