A reminder that the post articulates that:

“This argument is based on a pathway toward AGI. That is, while it will focus on the endpoint, where an AGI is created, it is likely that issues around resource distribution and relative power shifts within the international system caused by AI will come well before the development of AGI. The reason for focusing on the endpoint is the assumption that it would create an event horizon where the state that develops AGI achieves runaway power over its rivals economically, culturally and militarily. But many points before this could be equally valid depending on circumstances within the international system.”
That said, you are entirely right that at some point after AGI the scale of advantage would become overwhelmingly large and possibly undefeatable, but that doesn’t negate any of the points raised in the post. States will respond to that scenario, and in responding they will increase non-AGI risks.
“Conquering the planet is a relatively simple problem if you have exponential resources. You just need an overwhelming number of defense weapons for every nuclear delivery vehicle in the hands of the combination of everyone else, and you need a sufficient number of robotic occupying troops to occupy all the land and I guess suicide drones to eliminate all the conventional armies, all at once.”
This capacity is not likely to appear overnight, is it? And bear in mind this could mean having to defeat more than one state at the same time. During the interregnum between achieving AGI and building up the capacity to do this, other states will react in their own self-interest. Yes, there is a possible scenario in which a state pre-builds the capacity to overwhelm every other state once it achieves AGI, but in doing so it creates a security dilemma: other states will respond, and that response will also increase non-AGI risks.
“I would assume in scenarios where the advantage is large but temporary I am not sure there is a correct move other than attack. By letting the rivals catch up you are choosing to let their values possibly control the future. And their values include a bunch of things you don’t like.”
Indeed, but the same logic also applies to states worried about being attacked. How do you think they’ll respond to this threat?
It’s another world war no matter how you analyze it.
Even EY’s proposal of a strong pivotal act runs into this. Some power builds a controllable AI (an ‘aligned’ one, though real alignment may amount to a set of methodical rules tightly defining how the system is trained and what its goals are) and then acts to shut everyone else out from building their own AI.
There’s only one way to do that.