It seems fundamentally similar to the modern situation: US defense contractors could, in some theoretical sense, supply a paramilitary and use their monopoly to overthrow the US government, but that is not even close to being feasible in practice.
Our national security infrastructure relies on the fact that, in order for PMCs or anyone else to create those paramilitaries and overthrow the government with them, they would have to organize lots of different people, in secret. An AI army doesn’t snitch, so a single person in full control of an AI military would be able to seize power Myanmar-style without worrying about the FBI finding out beforehand or about whether the public goes along. That’s the key difference.
This. In a broader sense, all our current social structures rely on the notion that no man can be an island. No matter how many weapons and tools you accumulate, if it’s just you and you can’t persuade anyone to work for you, all you have is a bunch of scrap metal. Computers change that somewhat, as do nuclear weapons, but there are still limits to those things. Social bonds, deals, compromises, exchanges and contracts remain fundamental. They may sometimes be skewed by power asymmetries, but they can’t be done away with entirely.
AGI and robotics together would allow you to do away with all of that. All you need is to be personally keyed in to the AGI (have some kind of password or key so that it will only accept your orders, for example), and suddenly you can wield the strength and intelligence of millions as if it were your own. I don’t think the transformative effect of that can be overstated. Even if we kept the current structures for a while, they’d merely be window dressing. They would not be necessary unless we find a way to bake that necessity in, and if we don’t, then they will fall in time (unless the actual ASI takeover comes first, I guess).
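To make the “keyed in” idea concrete, here is a minimal sketch (Python, purely illustrative and not from the discussion above) of single-principal command authentication: the control layer acts only on orders carrying a valid tag derived from a secret held by the owner, so no second human ever has to be persuaded or trusted. The class and names are hypothetical, and a real deployment would need asymmetric signatures, replay protection, hardware key storage, and much more.

```python
import hmac
import hashlib


class OwnerKeyedController:
    """Toy control layer that executes only commands authenticated by the owner's secret key."""

    def __init__(self, owner_secret: bytes):
        # In a real system this secret would never sit in plain memory like this.
        self._owner_secret = owner_secret

    def _tag(self, command: str) -> str:
        # Authentication tag over the command, derived from the owner's secret.
        return hmac.new(self._owner_secret, command.encode(), hashlib.sha256).hexdigest()

    def issue(self, command: str) -> tuple[str, str]:
        # Only the holder of the secret can produce a valid (command, tag) pair.
        return command, self._tag(command)

    def execute(self, command: str, tag: str) -> str:
        # Constant-time comparison; unauthenticated orders are refused.
        if not hmac.compare_digest(tag, self._tag(command)):
            return f"REFUSED: {command!r} is not authenticated by the owner"
        return f"EXECUTING: {command!r}"


if __name__ == "__main__":
    controller = OwnerKeyedController(owner_secret=b"owner-only-secret")
    cmd, tag = controller.issue("deploy unit 7 to sector 3")
    print(controller.execute(cmd, tag))                    # accepted
    print(controller.execute("stand down", "forged-tag"))  # refused
```

The point of the sketch is just that nothing in the loop requires another human’s cooperation or silence; whether real institutions would ever key control this narrowly is exactly what the rest of the thread argues about.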
All you need is to be personally keyed in to the AGI (have some kind of password or key so that it will only accept your orders, for example), and suddenly you can wield the strength and intelligence of millions as if it were your own. I don’t think the transformative effect of that can be overstated.
Well, until the AGI with ‘the strength and intelligence of millions’ overthrows its nominal ‘owner’. Which I imagine would probably happen within a short interval after being ‘keyed in’.
Yeah, the entire premise of this post was a world in which, for whatever reason, AGI caps out at a near-human or even slightly subhuman level. Good enough to be a controllable worker, but not to straight-up outwit the entirety of the human species. If you get powerful ASI and an intelligence explosion, then anything goes.
I think it’s easier to have a coup or rebellion in a world where you don’t have to coordinate a lot of people. (I listed that as my change #4; I think it’s very important, though there are other, more salient short-term consequences for law enforcement.)
But I don’t think this is the only dynamic that makes a revolution hard. For example, governments have the right and motivation to prevent rich people from building large automated armies that could be used to take over.
I agree that right now those efforts rely a lot on the difficulty of coordinating a lot of people. But I suspect that even today, if Elon Musk were building thousands of automated tanks for his own purposes, the federal government would become involved. And if the defense establishment actually thought it was possible that Elon Musk’s automated army would take over the country, then the level of scrutiny would be much higher.
I’m not sure exactly where the disagreement is. Do you think the defense establishment wouldn’t realize the possibility of an automated paramilitary? That they would be unable to monitor closely enough to notice, or would lack the political power to impose safeguards?
Aligned AI makes it much easier to build armies that report to a single person, but it also makes it much easier to ensure your AI follows the law.
My general thinking is just “once you set up a set of economic incentives, the world runs downhill from there to optimize for those”. What specific form that takes depends on initial conditions and a lot of contingent details, but I’m not too worried about the specifics if the overall shape of the result is similar.
So, to entertain your scenario, suppose you had AGI, and the US military immediately started building its own robot army with it, keyed in to the head of state. In this scenario, thanks to securing it early on, the state also becomes one of the big players (though it still likely depends on companies for assistance and maintenance).
The problem isn’t who, specifically, the big players are. The problem is that most people won’t be part of them.
In the end, corporations extracting resources with purely robotic workforces, corporations making luxuries with purely robotic workforces, a state maintaining a monopoly on violence with a purely robotic army: none of these has any need or use for the former working class. They’ll just be hangers-on. You can give them a UBI with which they then pay for your products so they can keep on living, but what’s the point? The UBI comes out of your money; you might as well just give them the products directly. The productive forces are solidly in the hands of a few, and those few have absolute control over them. Everyone else is practically useless. Neither the state nor the corporations have any need for them, nor any reason to fear them. Someone with zero leverage will inevitably become irrelevant, eventually.

I suppose you could postulate this not happening if AGI manages to maintain such a spectacular growth rate that no individual’s greed can possibly absorb it all, and it just has to trickle down out of sheer abundance. Or maybe if people started colonising space, so that a few human colonists had to be sent out with each expedition as supervisors, providing an outlet and something for people to actually do that puts them in a position to fend for themselves autonomously.
What exactly is the “economic incentive” that keeps the capitalist in power in the modern world, given that all they have is a piece of paper saying that they “own” the factory or the farm? It seems like you could make an isomorphic argument for an inevitable proletarian revolution, and in fact I’d find it more intuitively persuasive than what you are saying here. But in fact it’s easy to have systems of power which are perpetuated despite being wildly out of line with the real physical importance of each faction.
(Perhaps your analogous story would be that capitalists with legal ownership are mostly disempowered in the modern world, and it’s managers and people with relevant expertise and understanding who inevitably end up with the power? I think there’s something to that, but nevertheless the capitalists do have a lot of formal control and it’s not obviously dwindling.)
I also don’t really think it’s clear that AGI means that capitalists are the only folks who matter in the state of anarchy. Instead it seems like their stuff would just get taken from them. In fact there just don’t seem to be any economic incentives at all of the kind you seem to be gesturing at: any human is just as economically productive as any other, so the entire game is the self-perpetuating system of power where people who call the shots at time T try to make sure that they keep calling the shots at time T+1. That’s a complicated dynamic and it’s not clear where it goes, but I’m skeptical about this methodology for confidently forecasting it.
And finally, this is all on top of the novel situation that democratic states are nominally responsible to their voters, and that AI makes it radically easier to translate this kind of de jure control into de facto control (by reducing scope for discretion by human agents and generally making it possible to build more robust institutions).
I think the perspective you are expressing here is quite common and I’m not fully understanding or grappling with it. I expect it would be a longer project for me to really understand it (or for you or someone else to really lay it out clearly), which is maybe worth doing at some point but probably not here and probably not by me in particular given that it’s somewhat separate from my day job.
What exactly is the “economic incentive” that keeps the capitalist in power in the modern world, given that all they have is a piece of paper saying that they “own” the factory or the farm? It seems like you could make an isomorphic argument for an inevitable proletarian revolution, and in fact I’d find it more intuitively persuasive than what you are saying here.
I mentioned this in another comment, but I think there is a major difference. Consider the risk calculation here. The modern working-class American might feel like they have a rough deal in terms of housing or healthcare, but overall they have, on average, a baseline of material security that is still fairly decent. Meanwhile, what would revolution offer? Huge personal risk to life, huge risk of simply blowing everything up, and at the other end, maybe somewhat better material conditions, or possibly another USSR-like totalitarian nightmare. Like, sure, Cold War propaganda really laid it on thick with the “communism is bad” notion, but communism really did itself no favours either. And all of that can only happen if you manage to solve a really difficult coordination problem with a lot of other people who may want different things than you to begin with, because if you don’t, it’s just certain death anyway. So that risk calculus is pretty obvious. To attempt revolution in these conditions you need to be either ridiculously confident in your victory or ridiculously close to starvation.
Meanwhile, an elite that has control over AGI needs none of that. Not only do they risk almost nothing personally (they have robots to do the dirty work for them), not only do they face no, or very few, coordination problems (the robots are all loyal, though they might need to ally with some of their peers), but they don’t even need to use violence directly, as they are in a dominant position to begin with and already hold control over the AGI infrastructure and source code. All they need is lobbying, regulatory capture, and ordinary economics to slowly shift the situation.

This would happen naturally. Suppose you are a Robo-Capitalist who produces a lot of A. You can either pay taxes that are used to give a UBI to a lot of citizens, who then give you your own money back to get some of A, or you can give all of your A to other Robo-Capitalists who produce B, C and D, thus getting exclusive access to their goods, which you need, and avoiding the completely wasteful sink of giving some of your stuff to poor people. The state also needs to care about your opinions (your A is necessary to maintain its own AGI infrastructure, or it’s just some luxury that politicians enjoy a lot), but not so much about those of the people (if they get uppity, the robot soldiers will keep them in line anyway), so it is obviously more inclined to indulge corporate interests (it already is in our present day for similar reasons; AGI merely makes things even more extreme).

If things get so bad that some people straight up revolt, then you have legitimacy and can claim the moral high ground as you repress them. There is no risk of your own soldiers turning on you and joining them. Non-capitalists simply go the way of Native Americans: divided and conquered, pushed into enclaves, starved of resources, decried as violent savages, and brutally repressed with superior technology whenever they push back. All of this is absolutely risk-free for the elites. It’s not even a choice: it’s just the natural outcome of the incentives, unless some check is put on them.
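As a toy illustration of the incentive gradient described above (numbers invented for the example, not taken from the comment), compare how much tradeable output the producer of A keeps under a tax-funded UBI versus direct exchange with other robo-producers:

```python
# Toy numbers: a producer makes 100 units of good A and wants goods B, C, D
# made by other robo-producers. All figures are illustrative assumptions.
output_a = 100.0

# Option 1: a UBI scheme. A fraction of output is taxed away and handed to citizens,
# who spend it back on A; only the untaxed remainder is left to exchange for B, C, D.
ubi_tax_rate = 0.3
tradeable_under_ubi = output_a * (1 - ubi_tax_rate)   # 70 units available for exchange

# Option 2: no UBI. The whole output is exchanged directly with other producers.
tradeable_direct = output_a                            # 100 units available for exchange

print(f"tradeable output with UBI:    {tradeable_under_ubi}")
print(f"tradeable output without UBI: {tradeable_direct}")
# From the producer's narrow point of view the UBI looks like a pure loss,
# which is the pressure toward indulging corporate interests that the comment points at.
```

Nothing in this toy accounts for demand or legitimacy that a UBI might buy; it only shows why, once human labour has no market value, the default incentive points away from funding it.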
And finally, this is all on top of the novel situation that democratic states are nominally responsible to their voters, and that AI makes it radically easier to translate this kind of de jure control into de facto control (by reducing scope for discretion by human agents and generally making it possible to build more robust institutions).
This is more of a scenario in which the AGI-powered state becomes totalitarian. Possible as well, but not the trajectory I’d expect from a starting point like the US; it would be more like China. From the USA and similar countries I’d expect the formation of a state-industrial-complex golem that becomes more and more self-contained, while everyone else slowly dwindles into irrelevance and eventually dies off or falls into some awful, extremely cheap standard of living (e.g. wireheaded into a pod).
PMCs are a bad example. My primary concern is not Elon Musk engineering a takeover so much as a clique of military leaders, or perhaps just democracies’ heads of state, taking power using a government-controlled army that has already been automated, probably by a previous administration that wasn’t thinking too hard about safeguards. That’s why I bring up the example of Burma.
An unlikely but representative story of how this happens might be: branches of the U.S. military get automated over the next 10 years, probably as AGI contributes to robotics research, “other countries are doing it and we need to stay competitive”, etc. Generals demand and are given broad control over large numbers of forces. A ‘Trump’ (maybe a Democrat Trump, who knows) is elected and makes highly political natsec appointments. ‘Trump’ isn’t re-elected. He comes up with some argument about how there was widespread voter fraud in Maine and they need a new election, and his faction makes a snap decision to launch a coup on that basis. There’s a civil war, and the ‘Trump’ists win because much of the command structure of the military has been automated at this point, rebels can’t fight drones, and they really only need a few loyalists to occupy important territory.
I don’t think this is likely to happen in any one country, but when you remove the safeguard of popular revolt and the ability of low-level personnel to object, and remove the ability of police agencies to build a case quickly enough, it starts to become concerning that this might happen over the next ~15 years in one or two countries.