violence via airstrikes … almost certainly never going to be accepted by the major powers
They can accept it if they are not the ones who get bombed.
I point to the NPT as precedent. There is one rule for the nuclear weapons states, and another rule for everyone else. The nuclear weapons states get to keep their nukes; everyone else agrees not to develop them.
In this case it’s a little different, because the premise is that AGI is safe for no one. But it can work like this. Let’s suppose that, as with the NPT, it’s the five permanent members of the UN Security Council who are the privileged states. Then the distinction is between how the five enforce the AGI ban among each other, and how they enforce it among everyone else. Among each other, they can be collegial and understanding of each other’s interests. For everyone else, diplomacy is given a chance, but there is much less patience for wilful evaders and violators of the ban.
Non-signatories to the NPT (Israel, India, Pakistan) were able to and did develop nuclear weapons without being subject to military action. By contrast (and very much contrary to international law), Yudkowsky proposes that non-signatories to his treaty be subject to bombardment.
Yes, the analogy is imperfect. An anti-AGI treaty with the absoluteness that Eliezer describes would treat the creation of AGI not just as an increase in danger that needs to be deterred, but as a tipping point that must never be allowed to happen in the first place. And that could lead to military intervention in a specific case, if lesser interventions (diplomacy, sabotage) failed to work.
Whether such military intervention—a last resort—would satisfy international law depends on the details. If all the great powers supported such a treaty, and if e.g. its application were supervised by the Security Council, I think it would be legal: Security Council authorization under Chapter VII of the UN Charter is the standard legal basis for the use of force.
On the other hand, if tomorrow some state on its own attacked the AI infrastructure of another state, on the grounds that the second state is endangering humanity… I’m sure lawyers could be found to argue that it was a lawful act under some principle or statute; but their arguments might meet resistance.
The main thing I am arguing is that a global anti-AI regime does not inherently require nuclear brinkmanship or sovereign acts of war.