The problem is Eliezer’s article explicitly calls for violence via airstrikes, which, to be a little blunt, is almost certainly never going to be accepted by the major powers, and that’s a huge amount of negative PR that AI companies can use against the AI doom people.
And even if it were accepted, I still wouldn’t agree with the article, for multiple reasons, but the article by Eliezer is basically giving free ammunition to AI companies to show why AI doom is bad.
violence via airstrikes … almost certainly never going to be accepted by the major powers
They can accept it if they are not the ones who get bombed.
I point to the NNPT as precedent. There is one rule for the nuclear weapons states, and another rule for everyone else. The nuclear weapons states get to keep their nukes, everyone else agrees not to develop them.
In this case it’s a little different, because the premise is that AGI is safe for no one. But it can work like this. Let’s suppose that as with the NNPT, it’s the five permanent members of the UN Security Council who are the privileged states. Then the distinction is between how the five enforce the AGI ban among each other, and how they enforce it among everyone else. Among each other, they can be all collegial and understanding of each other’s interests. For everyone else, diplomacy is given a chance, but there is much less patience for wilful evaders and violators of the ban.
Non-signatories to the NPT (Israel, India, Pakistan) were able to and did develop nuclear weapons without being subject to military action. By contrast (and very much contrary to international law), Yudkowsky proposes that non-signatories to his treaty be subject to bombardment.
Yes, the analogy is imperfect. An anti-AGI treaty with the absoluteness that Eliezer describes would treat the creation of AGI not just as an increase in danger that needs to be deterred, but as a tipping point that must never be allowed to happen in the first place. And that could lead to military intervention in a specific case, if lesser interventions (diplomacy, sabotage) failed to work.
Whether such military intervention—a last resort—would satisfy international law depends on the details. If all the great powers supported such a treaty, and if, e.g., the process of its application was supervised by the Security Council, I think it would necessarily be legal.
On the other hand, if tomorrow some state on its own attacked the AI infrastructure of another state, on the grounds that the second state is endangering humanity… I’m sure lawyers could be found to argue that it was a lawful act under some principle or statute; but their arguments might meet resistance.
The main thing I am arguing is that a global anti-AI regime does not inherently require nuclear brinkmanship or sovereign acts of war.
The other issue that falls out of this is that if anyone does successfully defect in secret while every other power honors the ban, they get an insurmountable advantage. Self-replicating factories buried deep underground or under the ocean could give that side certain victory and control of the planet. Nukes won’t be enough: you can’t deal with an exponential problem with a linear stockpile of weapons. Once there are more self-replicating nodes than nukes on the planet, victory for the side with the AGI is probably assured; a conventional military could not cope with swarm attacks, perfect aim, machine-to-machine coordination, and so on.
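The exponential-versus-linear point can be made concrete with a toy model. All the numbers here (initial node count, doubling time, stockpile size) are illustrative assumptions, not estimates of real capabilities:

```python
# Toy model: exponentially self-replicating nodes vs. a fixed weapons stockpile.
# Every parameter below is an arbitrary illustrative assumption.

def crossover_day(initial_nodes=1, doubling_days=30,
                  stockpile=12_000, horizon_days=3650):
    """First day on which node count exceeds the stockpile, or None within horizon."""
    nodes, day = initial_nodes, 0
    while day <= horizon_days:
        if nodes > stockpile:
            return day
        day += doubling_days
        nodes *= 2  # each node replicates once per doubling period
    return None

# Starting from a single node doubling monthly against a ~12,000-weapon
# stockpile, the crossover arrives after 14 doublings (~420 days).
print(crossover_day())  # -> 420
```

The specifics are fictional, but the shape of the result is robust: halving the doubling time or multiplying the stockpile tenfold shifts the crossover by months, not decades, which is the sense in which a linear defense loses to an exponential process.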
Victory comes for whichever side has control/monitoring over the majority of the planet, including the oceans, first. I don’t know of any technology that can achieve this that doesn’t first require you to have a controllable network of systems with the capabilities of AGI/ASI.
Controlling the GPUs isn’t enough; it’s too easy to build similar devices. Economics has concentrated most of the fabs at TSMC because it’s more efficient, but in a world where we know AGI is possible and GPUs/TPUs are a strategic advantage, you would expect every world power to start building its own.
Yes, if diplomacy fails and it does come to an uncontrollable state where chaos, dangerous hidden compute, or violence is the likely outcome… In the long run the exponential wins, but existing military power starts with an advantage, so a decisive early strike can abort an early-stage runaway RSI-ing AGI. I really hope it won’t come to that. I really hope that more of a ‘worldwide monitoring and policing’ action will be adequate to prevent defection.
Currently, the US military has good enough satellite monitoring of the world to detect most large-scale engineering projects. Undersea datacenter construction, though, is easier than it sounds, and correspondingly harder to spot. Example: https://news.microsoft.com/source/features/sustainability/project-natick-underwater-datacenter/