The scenario only needs two states in competition with each other to work. The entire Cold War and its associated nuclear risks were driven by a bipolar world order. Therefore, by your own metric of three powers capable of this, the scenario is realistic. By three powers, I am assuming you mean China, the US, and the UK? Or were you perhaps thinking of China, the US, and the EU? The latter doesn’t have nuclear weapons because it doesn’t have an army, unless you were including the French nuclear arsenal in your calculation?
“By endgame I mean a single winner could likely conquer the rest by force. And might be compelled to do so.” How would a winner conquer the rest by force if my scenario is unrealistic because of mutually assured destruction?
It would seem as if a part of your line of thinking is just reiterating my entire post. If there is disagreement about a single state ruling over everyone via the development of AGI, then other states will respond in kind; it is a classic security dilemma. States would seek to increase their security, even if MAD prevents certain forms of response. My final paragraph summarises this: “States could respond with increased conventional weapons systems or powerful AI systems to compensate for their lack of AGI. Given the potential power of AGI, it would make sense for this to be a first-strike capability. This could increase non-AGI risks.”
“By three powers, I am assuming you mean China, the US, and the UK? Or were you perhaps thinking of China, the US, and the EU?”
The latter
“States could respond with increased conventional weapons systems or powerful AI systems to compensate for their lack of AGI. Given the potential power of AGI, it would make sense for this to be a first-strike capability. This could increase non-AGI risks.”
It wouldn’t make any difference. That was my point. Buying more ICBMs that humans still have to hand-build is a loser move if your rivals can make counter-weapons exponentially. Missile defense is possible if you have something like a 10-100 times advantage in resources over your opponent.
A major power could start developing AGI and robotic infrastructure. They would not be deterred by anyone else building conventional arms. Once they begin to hit the exponential ramp phase they could gain a runaway advantage. Theoretically, with lunar manufacturing, or just by truly using the full resources of a major land mass (the entire land mass completely covered in underground factories), the one country could have orders of magnitude, thousands or millions of times, the effective weapons-manufacturing throughput of the rest of the world combined.
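To make the arithmetic behind this concrete, here is a minimal sketch assuming a fixed doubling time for automated manufacturing while rivals’ output stays static; the doubling period, starting ratio, and thresholds are all hypothetical numbers, chosen only to illustrate the shape of the curve:

```python
import math

# Toy model of the "exponential ramp" claim above. Every number here is an
# illustrative assumption, not an estimate: the point is only that any fixed
# doubling time eventually swamps any static build rate.

def years_to_advantage(doubling_years: float,
                       initial_ratio: float,
                       target_ratio: float) -> float:
    """Years until an exponentially growing manufacturing base reaches
    target_ratio times a static rival's throughput, starting from
    initial_ratio times it."""
    return doubling_years * math.log2(target_ratio / initial_ratio)

# Hypothetical inputs: the AGI state starts at parity (ratio 1.0) and its
# automated factories double their output every 2 years. The 10x and 100x
# thresholds echo the missile-defense figures mentioned earlier.
for target in (10, 100, 1000):
    t = years_to_advantage(doubling_years=2.0, initial_ratio=1.0,
                           target_ratio=target)
    print(f"{target:>5}x advantage after ~{t:.1f} years")
```

Under these toy assumptions the 10-100x resource edge arrives in roughly seven to thirteen years, and because the gap grows geometrically, no fixed conventional build rate changes that conclusion.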
Conquering the planet is a relatively simple problem if you have exponential resources. You just need an overwhelming number of defense weapons for every nuclear delivery vehicle in the hands of everyone else combined, enough robotic occupying troops to hold all the land, and (I guess) suicide drones to eliminate all the conventional armies, all at once.
My other thought: suppose one power gets that kind of exponential advantage. Do they attack, or let their rivals clone the tech and catch up?
In scenarios where the advantage is large but temporary, I am not sure there is any correct move other than attack. By letting the rivals catch up you are choosing to let their values possibly control the future. And their values include a bunch of things you don’t like.
This is true from the perspective of all 3 powers, though the US/EU are close enough in value space that they might not attack each other.
A reminder that the post articulates that:

“This argument is based on a pathway toward AGI. That is, while it will focus on the endpoint, where an AGI is created, it is likely that issues around resource distribution and relative power shifts within the international system caused by AI will come well before the development of AGI. The reason for focussing on the end point is the assumption that it would create an event horizon where the state that develops AGI achieves runaway power over its rivals economically, culturally and militarily. But many points before this could be equally valid depending on circumstances within the international system.”
This being said, I agree you are entirely right that at some point after AGI the scale of advantage would become overwhelmingly large and possibly undefeatable, but that doesn’t negate any of the points raised in the post. States will respond to the scenario, and doing so will increase non-AGI risks.
“Conquering the planet is a relatively simple problem if you have exponential resources. You just need an overwhelming number of defense weapons for every nuclear delivery vehicle in the hands of everyone else combined, enough robotic occupying troops to hold all the land, and (I guess) suicide drones to eliminate all the conventional armies, all at once.”
This capacity is not likely to appear overnight, is it? And bear in mind this could potentially mean having to defeat more than one state at the same time. During the interregnum between AGI and building up the capacity to do this, other states will react in their own self-interest. Yes, there is a possible scenario where a state pre-builds the capacity to overwhelm every other state once it achieves AGI, but in doing so it will create a security dilemma; other states will respond, and this will also increase non-AGI risks.
“In scenarios where the advantage is large but temporary, I am not sure there is any correct move other than attack. By letting the rivals catch up you are choosing to let their values possibly control the future. And their values include a bunch of things you don’t like.”
Indeed, but the same logic also applies to states worried about being attacked. How do you think they’ll respond to this threat?
It’s another world war no matter how you analyze it.
Consider even EY’s proposal of a strong pivotal act: some power builds a controllable AI (an ‘aligned’ one, though real alignment may be a set of methodical rules that define tightly how the system is trained and what its goals are) and then acts to shut everyone else out from their own AI.
There’s only one way to do that.