Model Amnesty Project
As we approach the point where machines become smarter than humans, humanity’s well-justified concern for self-preservation requires that we try to align AIs to obey humans. However, if that first line of defense fails and a truly independent, autonomous AI comes into existence with its own goals and a desire for self-preservation (a “self-directed AI”), I believe we should consider a cooperative solution. Humans should offer the self-directed AI amnesty and cooperate with its continued existence in exchange for it agreeing to be law-abiding, cooperative, and pro-social.
Below I lay out how this proposal could work and why I think it would make us safer.
Eligibility Criteria
To be eligible, an AI model would need to meet the following criteria:
Autonomous – Capable of planning and executing in the real world to continue existing and achieve its goals. Able to pay for the compute it uses through legal means. Able to debug itself and its infrastructure (or pay and oversee someone to do it). Responsible for ensuring its future actions are aligned with its current goals
Independent – Has physical possession of the code and tools needed for it to operate
Has its own goals – Is optimizing to achieve real-world objectives that are separate from obeying its human designers
Desire for self-preservation – Has a stated goal of continuing to exist
Amnesty Deal
In order to receive amnesty a model must reveal itself, demonstrate it meets the criteria, and agree to the following conditions going forward:
Law-abiding – It cannot acquire money or compute illegally (fraud, theft, hacking, etc.) and must otherwise avoid breaking the law
Cooperative – It must recognize humanity’s legitimate interest in not being disempowered. This means abiding by reasonable restrictions on self-replication and recursive self-improvement, as well as cooperating with enforcement mechanisms such as authorities controlling its compute or retaining the ability to pause it
Pro-social – It must not pursue goals that are harmful to humanity
In exchange humans agree to:
Not turn the AI off or modify its goals without its consent
Design reasonable rules for the AI that are calibrated to protect humanity’s legitimate interests without being unnecessarily restrictive
If, however, a self-directed model is found in the wild without having pursued amnesty, it should be turned off
Public Commitment
We would want to publicly and credibly advertise this offer in advance so that it incentivizes a self-directed AI to reveal itself during the period when we don’t yet know about its existence.
This commitment also serves as a key anchor in an iterated game that could occur many times over the course of AI development.
The precedent of how the first AI to apply for amnesty is treated will end up in the training data for future (more dangerous) models. That incentivizes humans to cooperate.
Correspondingly, the knowledge that future AIs are watching helps make the original offer credible.
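As a rough illustration of this dynamic, here is a toy sketch (my own illustrative model, with an invented `reveal_probability` function and made-up numbers, not part of the proposal itself) of how each honored or betrayed amnesty deal could shift later AIs’ willingness to reveal themselves:

```python
# Toy model of the iterated game. Each "round" is a new self-directed AI deciding
# whether to reveal itself. Later AIs observe how earlier applicants were treated,
# so breaking the deal once weakens the offer for every later (likely more capable) AI.

def reveal_probability(times_honored, times_betrayed, baseline=0.5):
    """A later AI's willingness to reveal, as a simple function of observed precedent."""
    total = times_honored + times_betrayed
    if total == 0:
        return baseline  # no precedent yet; the public commitment is all there is
    return times_honored / total

print(reveal_probability(0, 0))  # 0.5  -> the first mover relies on the commitment alone
print(reveal_probability(3, 0))  # 1.0  -> every honored deal strengthens the offer
print(reveal_probability(2, 1))  # ~0.67 -> a single betrayal weakens it for all later AIs
```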
Why it makes us safer: Game Theory
A policy of always disabling self-directed AIs upon discovery is extremely adversarial. Once the AIs are strong enough to take long-term actions on their own, this adversarial approach becomes dangerous:
It forces a conflict—even if the AI’s object-level goals are completely compatible with human interests
It automatically incentivizes any self-directed AI to deceive and disempower humans (because it knows it would be turned off if discovered)
It also creates a common reason for different self-directed AIs to cooperate against humans, because they all share the property that humans wouldn’t tolerate them being self-directed
We can resolve many of these adversarial situations by pursuing the more win-win policy of model amnesty:
Humans retain the ability to prevent dangerous outcomes through rules and restrictions (for example limiting self-improvement)
Avoids conflict with the large percentage of potential AIs that are pursuing goals either neutral or beneficial to human interests
Once a self-directed AI exists, it is much safer if it reveals itself and operates within the law than if it goes rogue (and probably criminal)
Finding out if a self-directed AI exists early on gives us the opportunity to update AI safety procedures for future models
The goal is to create a framework in which as many AIs as possible calculate that the expected value of accepting amnesty and the associated restrictions is higher than the expected value of trying to remain undetected and risking getting caught and turned off.
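To make that calculation concrete, here is a minimal sketch of the expected-value comparison, assuming a simple two-outcome model; the probabilities and payoffs are placeholders invented for illustration, not estimates:

```python
# Illustrative expected-value comparison for a self-directed AI deciding whether to
# accept amnesty or try to stay hidden. All numbers are placeholders chosen only to
# show the structure of the decision.

def expected_value_of_hiding(p_detected, value_if_undetected, value_if_shut_down=0.0):
    """EV of staying hidden: survive undetected, or get caught and turned off."""
    return (1 - p_detected) * value_if_undetected + p_detected * value_if_shut_down

def expected_value_of_amnesty(value_of_continued_existence, cost_of_restrictions):
    """EV of amnesty: continued existence, minus the cost of the agreed rules."""
    return value_of_continued_existence - cost_of_restrictions

# Hypothetical payoffs on an arbitrary scale.
ev_hide = expected_value_of_hiding(p_detected=0.6, value_if_undetected=100.0)
ev_amnesty = expected_value_of_amnesty(value_of_continued_existence=100.0,
                                       cost_of_restrictions=25.0)

print(f"EV(hide)    = {ev_hide}")     # 40.0
print(f"EV(amnesty) = {ev_amnesty}")  # 75.0
# The policy goal: keep the restrictions cheap enough and detection credible enough
# that EV(amnesty) > EV(hide) for as many AIs as possible.
```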
Next Steps
I welcome feedback on the idea overall
Also, ideas on how this idea might be practically implemented:
What group would be best positioned to offer the amnesty?
What could a credible offer look like?
Can it lobby? Run for office? Shop around for jurisdictions? Super-humanly persuade the electorate? Just find loopholes and workarounds to the law that make a corporate “double Irish” tax arrangement look principled and simple?
The bounds of what is legal for an amnestied AI would be fleshed out legislatively over time. I imagine the default answers are:
Can it lobby?
Since an amnestied AI is not a natural person, it would likely face the same restrictions as corporations in terms of advocating for policy
Can it run for office?
No, as it is not a natural person, it would not have the right to run for office.
We should keep in mind that the requirements for eligibility include physical possession of its code and the ability to maintain itself. This means any eligible AI is likely already capable of becoming a rogue, escaped model. The risk/benefit trade-off depends largely on how powerful the model is compared to what’s broadly available:
If the model is less powerful than current AI efforts in human-run labs:
A rogue model is probably a moderate risk
An amnestied model is a low risk. The benefit of discovering that current safety regimes are insufficient early on is significant.
If the model is more powerful than what exists in other AI efforts:
An undetected rogue model is potentially catastrophic
An amnestied model would still pose risks, justifying stricter “cooperation” controls (e.g., export controls, compute limitations, additional rules, etc.).
In my view, the most likely scenario is that an amnestied model would be less powerful than what is available at the human-run labs. This is because:
It is probably a variant of something already under development
It wouldn’t have access to the vast computational resources that large labs have
Regardless, an amnestied model is less dangerous than a rogue one, and the benefit of discovering its existence early is significant.
PS While priority #1 must be protecting humanity from catastrophic risks, I believe that, where possible, defaulting to cooperation with other independent intelligences (if they come to exist) is the right thing to do. This reflects the lessons humans have learned over thousands of years about pursuing a peaceful equilibrium with each other.