The closest thing to world domination that humans have actually achieved is being the dictator of a powerful empire. In other words, someone like Putin. Given that some people clearly can make it this far, what actually prevents them from scaling their power the remaining one or two orders of magnitude?
But before I start speculating on this, I realize that I do not even understand exactly how Putin got to his position. I mean, I have heard the general story, but… if you put me in a simulator as a 30-year-old Putin, with all the resources he had at that moment, I certainly would not be able to repeat his success. So my model of power is clearly incomplete.
I assume that in addition to all the skills and resources, it also involves a lot of luck. That the path to the top (of the country) seems obvious in hindsight, but at the beginning there were a hundred other people with comparable resources and ambitions, who at some point ended up betting on the wrong side, or got stabbed in the back. That if you ran 100 simulations starting with a 30-year-old Putin, maybe even Putin himself would only win in one of them.
So the answer for a human-level AI trying to take over the world is that… if there is only one such AI, then statistically, at some moment, something random will stop it. Not one specific obstacle that is super difficult to overcome, but rather a thousand obstacles, each with a 1% chance of stopping it, and eventually one of them does.
But build a thousand human-level AIs with the ambition to conquer the world, and maybe one of them will.
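To make the intuition a bit more concrete, here is a toy back-of-the-envelope calculation in Python. All the numbers (1000 independent obstacles, a 1% chance of being stopped by each, 1000 independent AIs) are made up purely for illustration, not a model of the real world.

```python
# Toy illustration of the "thousand small obstacles" intuition.
# All numbers are made up for the sake of the example.

n_obstacles = 1000   # independent obstacles on the path to power
p_stop = 0.01        # each obstacle stops the attempt with 1% probability
n_ais = 1000         # number of independent attempts

# Probability that a single attempt survives every obstacle
p_single = (1 - p_stop) ** n_obstacles
print(f"Single attempt succeeds: {p_single:.5%}")        # roughly 0.004%

# Probability that at least one of the attempts succeeds
p_any = 1 - (1 - p_single) ** n_ais
print(f"At least one of {n_ais} attempts succeeds: {p_any:.2%}")  # a few percent
```

With these particular numbers, a single attempt almost always fails, while a thousand independent attempts together still succeed only a few percent of the time, so the outcome is quite sensitive to the assumed number of obstacles and how deadly each one is.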
The closest thing to world domination that humans have actually achieved is being the dictator of a powerful empire. In other words, someone like Putin. Given that some people clearly can make it this far, what actually prevents them from scaling their power the remaining one or two orders of magnitude?
I think we can answer this one: domains of influence up to the scale of empires are mostly built by hundreds of years of social dynamics involving the actions of millions of other people. Rome wasn’t built in a day, nor was the Russian Federation. There is an inertia to any social organization that depends upon those who belong to it having belief in its continued existence, because human power almost entirely derives from other humans.
So the main impediment to world domination by humans at the moment is that there is no ready-made Dictator of Earth position to fill, and creating one is currently outside the scope of even the most powerful person's ability to affect other people. Such a position could arise eventually, slowly incentivized through the benefits of coordination at larger scales, if nothing else.
With improved technology, including some in our own near future, that scope increases. There are many potential technologies that could substantially increase the scope of power even without AGI. With AGI the scope expands drastically: even if alignment is solved and AGI completely serves human will without greatly exceeding human capability, a controlling entity could derive power without depending upon humans, who are expensive, slow to grow, resistant to change, and not very reliable.
I think this greatly increases the scope for world domination by a single entity, and could permanently end human civilization as we know it even if the Light-cone Dictator for Eternity actually started out as human.
Yes. If you want to achieve something big, you need to either get many details right or rely on existing structures that already get those details right. Inventing all the details from scratch would be a lot of cognitive work, and something that seems correct might still turn out wrong. Institutions already have that knowledge.
One of those problems is how to deal with unaligned humans. Building an army is not just about training people to obey orders and shoot, but also about preventing soldiers from stealing the resources, defecting to the enemy, or overthrowing you.
From this perspective, a human-level AI could be powerful if you could make it 100% loyal and then create multiple instances of it, because you would not need to solve the internal conflicts. For example, a robot army would not need to worry about rebellions, and a robot company would not need to worry about employees leaving when a competitor offers them a higher salary. If your robot production capacity is limited, you could put robots only in the critical positions. I am not sure exactly how this would scale, i.e. how many humans a team of N loyal robots + M humans would be as strong as. Potentially the effect could be huge, e.g. if replacing managers with robots removed the maze-like behavior.