In the long run it seems pretty clear labor won’t have any real economic value. It seems like the easiest way for everyone to benefit is for states to either own capital themselves or tax capital, and use the proceeds to benefit citizens. (You could also have sufficiently broad capital ownership, but that seems like a heavier lift from here.)
I’m not sure why you call this “relying on the benevolence of the butcher.” Typically states collect taxes using the threat of force, not by relying on companies to be benevolent. (If states own capital then they aren’t even using the threat of force.)
Maybe you mean the citizens are relying on the benevolence of the state? But in a democracy they do retain formal power via voting, which is not really benevolence. Governance is harder in a world without revolution or coups as a release valve, but I’m not sure it’s qualitatively different from the modern situation. In some theoretical sense the US military could say “screw the voters” and just kill them and take their stuff, and that would indeed become easier if a bunch of humans in the military didn’t have to go along with the plan. But it seems like the core issue here is transferring currently-implicit checks and balances to a world with aligned AI. I don’t think this requires crippling the tech at all, just being careful about checks and balances so that an army which nominally works on behalf of the voters actually does so.
Maybe you mean that companies that make AI systems and robots could in aggregate just overthrow the government rather than pay taxes? (That sounds like what you mean by “leave everyone else to fend for themselves,” though presumably they also have to steal or own all the natural resources or else the rest of the world would just build AGI later, so I am thinking of this more as a violent takeover rather than peaceful secession.) That’s true in some sense, but it seems fundamentally similar to the modern situation—US defense contractors could in some theoretical sense supply a paramilitary and use their monopoly to overthrow the US government, but that’s not even close to being feasible in practice. Most of the fundamental dynamics that prevent strong paramilitaries today seem like they apply just as well. There are plenty of mechanisms other than “cryptographic veto” by which we can try to build a military that is effectively controlled by the civilian government.
It seems to me like there are interesting challenges in the world with AI:
1. The current way we tax capital gains is both highly inefficient and unlikely to generate much revenue. I think there are much better options, but tax policy seems unlikely to change enough to handle AI until it becomes absolutely necessary. If we fail to solve this then median incomes could fall very far below average income (see the toy sketch after this list).
2. Right now the possibility of revolutions or coups seems like an important sanity check on political systems, and involving lots of humans is an important part of how we do checks and balances. Aligned AI would greatly increase the importance of formal chains of command and formal systems of governance, which might require more robust formal checks and balances.
3. It’s qualitatively harder for militaries to verify AI products than physical weapons. Absent alignment this seems like a dealbreaker, since militaries can’t use AI without a risk of coup, but even with alignment it is a challenging institutional problem.
4. Part of how we prevent violent revolutions is that they require a lot of humans to break the law, and it may be easier to coordinate large-scale law-breaking with AI. This seems like a law enforcement problem that we will need to confront for a variety of reasons.
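To make the first point above concrete (median income falling far below average income), here is a toy numeric sketch. Every number in it (a 100-person economy, five capital owners, 10x growth, the labor share dropping from 60% to 1%) is an invented assumption for illustration, not a figure from the discussion.

```python
import statistics

# Toy sketch, all numbers invented for illustration: a 100-person economy in
# which 5 people own essentially all the capital. Labor income is split evenly
# across everyone; capital income is split evenly across the owners.
POPULATION = 100
CAPITAL_OWNERS = 5

def incomes(total_income, labor_share):
    labor_each = total_income * labor_share / POPULATION
    capital_each = total_income * (1 - labor_share) / CAPITAL_OWNERS
    return [labor_each + (capital_each if i < CAPITAL_OWNERS else 0)
            for i in range(POPULATION)]

today = incomes(total_income=10_000_000, labor_share=0.60)
later = incomes(total_income=100_000_000, labor_share=0.01)  # 10x growth, ~zero labor share

for label, dist in [("today", today), ("post-automation", later)]:
    print(f"{label}: mean = {statistics.mean(dist):,.0f}, median = {statistics.median(dist):,.0f}")
```

Under those assumed numbers, average income rises tenfold while median income falls by a factor of six; the tax-policy point is about closing that kind of gap.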
I don’t think it’s right to think of this as “no paths out of the trap;” more like “there are a lot of ways society would need to adapt to AI in order to achieve outcomes that would be broadly considered desirable.”
it seems fundamentally similar to the modern situation—US defense contractors could in some theoretical sense supply a paramilitary and use their monopoly to overthrow the US government, but that’s not even close to being feasible in practice.
Our national security infrastructure relies on the fact that, in order for PMCs or anyone else to create those paramilitaries and overthrow the government with them, they would have to organize lots of different people, in secret. An AI army doesn’t snitch, so a single person in full control of an AI military would be able to seize power Myanmar-style without worrying about either the FBI finding out beforehand or whether the public goes along. That’s the key difference.
This. In a broader sense, all our current social structures rely on the notion that no man can be an island. No matter how many weapons and tools you accumulate, if it’s just you and you can’t persuade anyone to work for you, all you have is a bunch of scrap metal. Computers somewhat change that, as do nuclear weapons, but there are still limits to those things. Social bonds, deals, compromises, exchanges and contracts remain fundamental. They may sometimes be skewed by power asymmetries, but they can’t be done without entirely.
AGI and robotics together would allow you to do without them. All you need is to be personally keyed in to the AGI (have some kind of password or key so that it will only accept your orders, for example), and suddenly you can wield the strength and intelligence of millions as if it were your own. I don’t think the transformative effect of that can be overstated. Even if we kept the current structures for a while, they’d merely be window dressing. They would not be necessary unless we find a way to bake that necessity in, and if we don’t, then they will in time fall (unless the actual ASI takeover comes first, I guess).
All you need is to be personally keyed in to the AGI (have some kind of password or key so that it will only accept your orders, for example), and suddenly you can wield the strength and intelligence of millions as if it were your own. I don’t think the transformative effect of that can be overstated.
Well until the AGI with ‘the strength and intelligence of millions’ overthrows their nominal ‘owner’. Which I imagine would probably be within a short interval after being ‘keyed in’.
Yeah, the entire premise of this post was a world in which, for whatever reason, AGI caps out at a near-human or even slightly subhuman level: good enough to be a controllable worker but not to straight up outwit the entirety of the human species. If you get powerful ASI and an intelligence explosion, then anything goes.
I think it’s easier to have a coup or rebellion in a world where you don’t have to coordinate a lot of people. (I listed that as my change #4, I think it’s very important though there are other more salient short-term consequences for law enforcement.)
But I don’t think this is the only dynamic that makes a revolution hard. For example, governments have the right and motivation to prevent rich people from building large automated armies that could be used to take over.
I agree that right now those efforts rely a lot on the difficulty of coordinating a lot of people. But I suspect that even today, if Elon Musk were building thousands of automated tanks for his own purposes, the federal government would become involved. And if the defense establishment actually thought it was possible that Elon Musk’s automated army would take over the country, then the level of scrutiny would be much higher.
I’m not sure exactly where the disagreement is—do you think the defense establishment wouldn’t realize the possibility of an automated paramilitary? That they would be unable to monitor well enough to notice, or that they wouldn’t have the political power to impose safeguards?
Aligned AI makes it much easier to build armies that report to a single person, but it also makes it much easier to ensure your AI follows the law.
My general thinking is just “once you set up a set of economic incentives, the world runs downhill from there to optimize for those”. What specific form that takes depends on initial conditions and a lot of contingent details, but I’m not too worried about those details if the overall shape of the result is similar.
So to entertain your scenario, suppose you had AGI, and immediately the US military started forming up their own robot army with it, keyed in to the head of state. In this scenario, thanks to securing it early on, the state also becomes one of the big players (though they still likely depend on companies for assistance and maintenance).
The problem isn’t who, specifically, the big players are. The problem is that most people won’t be part of them.
In the end, corporations extracting resources with purely robotic workforces, corporations making luxuries with purely robotic workforces, a state maintaining a monopoly of violence with a purely robotic army—none of these have any need or use for the former working class. They’ll just be hangers-on. You can give them a UBI with which they then pay for your products so they can keep on living, but what’s the point? The UBI comes out of your money; you might as well give them the products directly. The productive forces are solidly in the hands of a few, and they have absolute control over them. Everyone else is practically useless. Neither the state nor the corporations have any need for them, nor reason to fear them. Someone with zero leverage will inevitably become irrelevant. I suppose you could postulate this not happening if AGI manages to maintain such a spectacular growth rate that no one’s individual greed can possibly absorb it all, and it just has to trickle down out of sheer abundance. Or maybe if people started colonising space, so that a few human colonists had to be sent out with each expedition as supervisors, providing a release valve and something for people to actually do that puts them in a position to fend for themselves autonomously.
What exactly is the “economic incentive” that keeps the capitalist in power in the modern world, given that all they have is a piece of paper saying that they “own” the factory or the farm? It seems like you could make an isomorphic argument for an inevitable proletarian revolution, and in fact I’d find it more intuitively persuasive than what you are saying here. But in fact it’s easy to have systems of power which are perpetuated despite being wildly out of line with the real physical importance of each faction.
(Perhaps your analogous story would be that capitalists with legal ownership are mostly disempowered in the modern world, and it’s managers and people with relevant expertise and understanding who inevitably end up with the power? I think there’s something to that, but nevertheless the capitalists do have a lot of formal control and it’s not obviously dwindling.)
I also don’t really think it’s clear that AGI means capitalists are the only folks who matter in the state of anarchy. Instead it seems like their stuff would just get taken from them. In fact there just don’t seem to be any economic incentives at all of the kind you seem to be gesturing at; no human is any more economically productive than any other, so the entire game is the self-perpetuating system of power where the people who call the shots at time T try to make sure they keep calling the shots at time T+1. That’s a complicated dynamic and it’s not clear where it goes, but I’m skeptical of this methodology for confidently forecasting it.
And finally, this is all on top of the novel situation that democratic states are nominally responsible to their voters, and that AI makes it radically easier to translate this kind of de jure control into de facto control (by reducing scope for discretion by human agents and generally making it possible to build more robust institutions).
I think the perspective you are expressing here is quite common and I’m not fully understanding or grappling with it. I expect it would be a longer project for me to really understand it (or for you or someone else to really lay it out clearly), which is maybe worth doing at some point but probably not here and probably not by me in particular given that it’s somewhat separate from my day job.
What exactly is the “economic incentive” that keeps the capitalist in power in the modern world, given that all they have is a piece of paper saying that they “own” the factory or the farm? It seems like you could make an isomorphic argument for an inevitable proletarian revolution, and in fact I’d find it more intuitively persuasive than what you are saying here.
I mentioned this in another comment, but I think there is a major difference. Consider the risk calculation here. The modern working-class American might feel like they have a rough deal in terms of housing or healthcare, but overall they have, on average, a baseline of material security that is still fairly decent. Meanwhile, what would revolution offer? Huge personal risk to life, huge risk of simply blowing up everything, and at the other end maybe somewhat better material conditions, or possibly another USSR-like totalitarian nightmare. Like, sure, Cold War propaganda really laid it on thick with the “communism is bad” notion, but communism really did itself no favours either. And all of that can only happen if you manage to solve a really difficult coordination problem with a lot of other people who may want different things than you to begin with, because if you don’t, it’s just certain death anyway. So that risk calculus is pretty obvious. To attempt revolution in these conditions you need to be either ridiculously confident in your victory or ridiculously close to starvation.
Meanwhile, an elite that has control over AGI needs none of that. Not only do they risk almost nothing personally (they have robots to do the dirty work for them), not only do they face few or no coordination problems (the robots are all loyal, though they might need to ally with some of their peers), but they don’t even need to use violence directly, as they are in a dominant position to begin with and already hold control over the AGI infrastructure and source code. All they need is lobbying, regulatory capture, and regular economics to slowly shift the situation.

This would happen naturally, because suppose you are a Robo-Capitalist who produces a lot of A. You can either pay taxes which are used to give UBI to a lot of citizens who then give you your own money back to get some of A, or you can give all of your A to other Robo-Capitalists who produce B, C and D, thus getting exclusive access to their goods, which you need, and avoiding the completely wasteful sink of giving some stuff to poor people. The state also needs to care about your opinions (your A is necessary to maintain its own AGI infrastructure, or it’s just some luxury that politicians enjoy a lot), but not so much about those of the people (if they get uppity the robot soldiers will put them in line anyway), so it is obviously more inclined to indulge corporate interests (it already is in our present day for similar reasons; AGI merely makes things even more extreme).

If things get so bad that some people straight up revolt, then you have legitimacy and can claim the moral high ground as you repress them. No risk of your own soldiers turning on you and joining them. Non-capitalists simply go the way of Native Americans: divided and conquered, pushed into enclaves, starved of resources, and decried as violent savages and brutally repressed with superior technology whenever they push back. All of this absolutely risk-free for the elites. It’s not even a choice: it’s just the natural outcome of incentives, unless some stopper is put to them.
And finally, this is all on top of the novel situation that democratic states are nominally responsible to their voters, and that AI makes it radically easier to translate this kind of de jure control into de facto control (by reducing scope for discretion by human agents and generally making it possible to build more robust institutions).
This is more of a scenario in which the AGI-powered state becomes totalitarian. Possible as well, but not the trajectory I’d expect from a starting point like the US; it would be more like China. From the USA and similar countries I’d expect the formation of a state-industrial-complex golem that becomes more and more self-contained, while everyone else slowly dwindles into irrelevance and eventually dies off or falls into some awful, extremely cheap standard of living (e.g. wireheaded into a pod).
PMCs are a bad example. My primary concern is not Elon Musk engineering a takeover so much as a clique of military leaders, or perhaps just democracies’ heads of state, taking power using a government-controlled army that has already been automated, probably by a previous administration that wasn’t thinking too hard about safeguards. That’s why I bring up the example of Burma.
An unlikely but representative story of how this happens might be: branches of the U.S. military get automated over the next 10 years, probably as AGI contributes to robotics research, because “other countries are doing it and we need to stay competitive”, etc. Generals demand and are given broad control over large numbers of forces. A ‘Trump’ (maybe a Democrat Trump, who knows) is elected and makes highly political natsec appointments. ‘Trump’ isn’t re-elected. He comes up with some argument about how there was widespread voter fraud in Maine and they need a new election, and his faction makes a split decision to launch a coup on that basis. There’s a civil war, and the ’Trump’ists win because much of the command structure of the military has been automated at this point, rebels can’t fight drones, and they really only need a few loyalists to occupy important territory.
I don’t think this is likely to happen in any given country, but when you remove the safeguard of popular revolt and the ability of low-level personnel to object, and remove the ability of police agencies to build a case quickly enough, it starts to become concerning that this might happen over the next ~15 years in one or two countries.
Maybe you mean that companies that make AI systems and robots could in aggregate just overthrow the government rather than pay taxes?
Something along those lines, but honestly I’d expect it to happen more gradually. My problem is that the current situation rests on the fact that everyone involved needs everyone else, to a point. We’ve arrived at this arrangement through a lot of turbulent history and conflict. Ultimately, for example, a state can’t just… kill the vast majority of its population. It would collapse. That creates a need for even the worst tyrannies to somewhat balance their excesses, if they’re not going completely insane and essentially committing suicide as a polity (this does sometimes happen). Similarly, companies can only get away with so much mistreatment of workers or pollution before either competition, boycotts, or the long arm of the law (backed by politicians who need their constituents’ votes) gets them.
But all this balance is the product of an equilibrium of mutual need. Remove the need, and the institutions might survive for a while, out of inertia. But I don’t think it would be a stable situation. Gradually everyone would realise that they can now get away with things they couldn’t get away with before, either suffering no consequences or being able to ignore them.
Similarly, there’s no real reason a king ought to have power. The people could just not listen to him, or execute him. And yet...
If you want to describe a monarch as “relying on the benevolence of the butcher” then I guess sure, I see what you mean. But I’m not yet convinced that this is a helpful frame on how power works or a good way to make forecasts.
A democracy, even with zero value for labor, seems much more stable than historical monarchies or dictatorships. There are fewer plausibly legitimate challengers (and less room for a revolt), and there is a better mechanism for handling succession disputes. AI also seems likely to generally increase the stability of formal governance (one of the big things people complain about!).
Another way of putting it is that capitalists are also relying on the benevolence of the butcher, at least in the world of today. Their capital doesn’t physically empower them; 99.9% of what they have is title and the expectation that law enforcement will settle disputes in their favor (and that they can pay security, who again has no real reason to listen to them beyond the reason they would listen to a king). Aligned AI systems may increase the importance of formal power, since you can build machines that reliably do what their designer intended rather than relying on humans to do what they said they’d do. But I don’t think that asymmetrically favors the capitalist (who has on-paper control of their assets) over the government (which has on-paper control of the military and the power to tax).
Similarly, there’s no real reason a king ought to have power. The people could just not listen to him, or execute him. And yet...
Feudal systems were built on trust. The King had legitimacy with his Lords, who held him up as a shared point of reference, someone who would mediate and maintain balance between them. The King had to earn and keep that trust. Kings were ousted or executed when they betrayed that trust. Like, all the time. The first that come to mind would be John Lackland, Charles I, and of course, most famously, Louis XVI. Feudalism pretty much crumbled once material conditions made it no longer necessary or functional, and with it went most kings, or they had to find ways to survive in the new order by changing their role into that of figureheads.
I’m saying building AGI would make the current capitalist democracy obsolete the way industrialization and firearms made feudalism obsolete, and I’m saying the system afterwards wouldn’t be as nice as what we have now.
Another way of putting it is that capitalists are also relying on the benevolence of the butcher, at least in the world of today. Their capital doesn’t physically empower them; 99.9% of what they have is title and the expectation that law enforcement will settle disputes in their favor (and that they can pay security, who again has no real reason to listen to them beyond the reason they would listen to a king).
I think again, the problem here is a balance of risks and trust. No one wants to rock the boat too much, even if rocking the boat might end up benefitting them, because it might also not. It’s why most anti-capitalists who keep pining for a popular revolution are kind of deluding themselves: people won’t risk their lives while they have relative material security for the sake of a possible improvement in their conditions that might just as easily turn out to be a totalitarian nightmare instead. It’s a stupid bet no one would take. Changing conditions would change the risks, and thus the optimal choice. States wouldn’t go against corporations, and corporations wouldn’t go against states, if both are mutually dependent on each other. But both would absolutely screw over the common people if they had nothing to fear or lose from it, which is something AGI could really cement.
I think if you want to argue that this is a trap with no obvious way out, such that utopian visions are wishful thinking, you’ll probably need a more sophisticated version of the political analysis. I don’t currently think the fact that e.g. labor’s share of income is 60% rather than 0% is the primary reason US democracy doesn’t collapse.
I believe that AGI will have lots of weird effects on the world, just not this particular one. (Also that US democracy is reasonably likely to collapse at some point, just not for this particular reason or in this particular way.)
When we have AGI, humanity will collectively be a “king” of sorts, i.e. a species that for some reason rules over another, strictly superior species. So it would really help if “depose the king” were not a strong convergent goal.
Personally, I see the main reason kings and dictators keep power as being that killing or deposing them would lead to a collapse of the established order and a new struggle for power between different parties, with a likely worse result for all involved than just letting the king rule.
So, if we have AIs as many separate, sufficiently aligned agents instead of one “God AI”, then keeping humanity on top will not only match their alignment programming but also serve as a guarantee of stability, the alternative being a total AI-vs-AI war.
Ultimately, for example, a state can’t just… kill the vast majority of its population. It would collapse. That creates a need for even the worst tyrannies to somewhat balance their excesses
Unless the economy of the tyranny is mostly based on extracting and selling natural resources, in which case everyone else can be killed without much impact on the economy.
Yeah, there are rather degenerate cases, I guess. I was thinking of modern industrialised states with complex economies. Even feudal states with a mostly near-subsistence agricultural peasantry could take a lot of population loss without suffering much (the Black Death depopulated Europe to an insane degree, but society remained fairly functional), but in that case what was missing was the capability to actually carry out slaughter on an industrial scale. Still, the repression of peasant revolts could get fairly bloody, and eventually, as technology improved, some really destructive wars were fought (e.g. the Thirty Years’ War).
In the long run it seems pretty clear labor won’t have any real economic value
I’d love to see a full post on this. It’s one of those statements that rings true since it taps into the underlying trend (at least in the US) where the labor share of GDP has been declining. But *check notes* that was from 65% to 60%, and had some upstreaks in there. So it’s also one of those statements that, upon reflection, has a lot of ways to end up false: in an economy with labor crowded out by capital, what does the poor class have to offer the capitalists that would provide the basis for a positive return on their investment (or are they...benevolent butchers in the scenario)? Also, this dystopia just comes about without any attempts to regulate the business environment in a way that makes the use of labor more attractive? Like I said, I’d love to see the case for this spelled out in a way that allows for a meaningful debate.
As you can tell from my internal debate above, I agree with the other points—humans have a long history of voluntarily crippling our technology or at least adapting to/with it.
In the long run I think it’s extremely likely you can make machines that can do anything a human can do, at well below human subsistence prices. That’s a claim about the physical world. I think it’s true because humans are just machines built by biology, there’s strong reason to think we can ultimately build similar machines, and the actual energy and capital cost of a machine to replace human labor would be well below human subsistence. This is all discussed a lot but hopefully not super controversial.
If you grant that, then humans may still pay other humans to do stuff, or may still use their political power to extract money that they give to laborers. But the actual marginal value of humans doing tasks in the physical world is really low.
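As a rough back-of-envelope on the “well below human subsistence” claim, here is a sketch in which every number (machine power draw, electricity price, hardware cost and lifetime, the subsistence floor) is an assumption chosen only for illustration. The point is just that the energy term is tiny, so the comparison mostly hinges on how far amortized hardware costs fall.

```python
# Back-of-envelope sketch, every number an illustrative assumption: compare the
# yearly cost of running a hypothetical human-replacing machine against a rough
# assumed floor on human subsistence cost.

HOURS_PER_YEAR = 24 * 365

machine_power_kw = 0.5            # assumed continuous power draw of the machine
electricity_usd_per_kwh = 0.10    # assumed electricity price
hardware_cost_usd = 15_000        # assumed up-front hardware cost (mature robotics)
hardware_lifetime_years = 10      # assumed lifetime, straight-line amortization

energy_per_year = machine_power_kw * HOURS_PER_YEAR * electricity_usd_per_kwh
hardware_per_year = hardware_cost_usd / hardware_lifetime_years
machine_per_year = energy_per_year + hardware_per_year

human_subsistence_per_year = 5_000  # assumed rough floor for food, shelter, etc.

print(f"machine: ~${machine_per_year:,.0f}/year "
      f"(energy ${energy_per_year:,.0f} + amortized hardware ${hardware_per_year:,.0f})")
print(f"assumed human subsistence floor: ~${human_subsistence_per_year:,}/year")
```

Under those assumed numbers the machine comes in at a fraction of the subsistence floor, with energy alone at only a few hundred dollars a year; that is the sense in which the marginal value of human physical labor ends up very low.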
in an economy with labor crowded out by capital, what does the poor class have to offer the capitalists that would provide the basis for a positive return on their investment
I don’t understand this. I don’t think you can get money with your hands or mind; the basis for a return on investment is that you own productive capital.
Also, this dystopia just comes about without any attempts to regulate the business environment in a way that makes the use of labor more attractive?
It’s conceivable that we could make machines that are much better than humans, but make their use illegal. I’m betting against it for a variety of reasons: jurisdictions that took this route would get badly outcompeted, so it would require strong global governance; it would be bad for human welfare, and this fact would eventually become clear; and it would disadvantage capitalists and other elites who have a lot of political power.