I think that sticking to capitalism as an economic system post-singularity would be pretty clearly catastrophic and something to strongly avoid, despite capitalism working pretty well today. I’ve talked about this a bit previously here, but some more notes on why:
Currently, our society requires the labor of sentient beings to produce goods and services. Capitalism incentivizes that labor by providing a claim on society’s overall production in exchange for it. If the labor of sentient beings becomes largely superfluous as an economic input, however, then having a system that effectively incentivizes that labor also becomes largely superfluous.
Currently, we rely on the mechanism of price discovery to aggregate and disseminate information about the optimal allocation of society’s resources. But it’s far from an optimal mechanism for allocating resources, and a superintelligence with full visibility and control could do a much better job of resource allocation without falling prey to common pitfalls of the price mechanism such as externalities.
Capitalism incentivizes the smart allocation of capital in the same way it incentivizes labor. If society can make smart capital allocation decisions without relying on properly incentivized investors, however, then as with labor there’s no reason to keep such an incentive mechanism.
While very large, the total optimization pressure humanity puts into economic competition today would likely pale in comparison to that of a post-singularity future. In the context of such a large increase in optimization pressure, we should generally expect extremal Goodhart failures.
More specifically, competitive dynamics incentivize the reinvestment of all economic proceeds back into resource acquisition, lest you be outcompeted by another entity doing so. Such a dynamic pushes out actors that divert proceeds toward the flourishing of sentient beings, replacing them with actors that forgo any such spending in favor of more resource acquisition.
Furthermore, having the proceeds of post-singularity economic expansion flow to the owners of existing capital is very far from socially optimal. It strongly disfavors future generations and simulated humans, and it introduces a huge amount of variance into whether we end up with a positive future, putting a substantial amount of control into the hands of a set of people whose consumption decisions need not align with the socially optimal allocation.
Capitalism is a complex system with many moving parts, some of which are sometimes assumed to constitute the entirety of what defines it. What kinds of components do you see as being highly unlikely to be included in a successful utopia, and what components could be internal to a well-functioning system as long as (potentially-left-unspecified) conditions are met? I could name some kinds of components (e.g. some kinds of contracts or enforcement mechanisms) that I expect not to be used in a utopia, though I suspect at this point you’ve seen my comments where I get into this, so I’m more interested in what you say without that prompting.
Who’s this “we” you’re talking about? It doesn’t seem to be any actual humans I recognize. As far as I can tell, the basics of capitalism (call it “simple capitalism”) are just what happens when individuals make decisions about resource use. We call it “ownership”, but really any form of resolution of the underlying conflict of preferences would likely work out similarly. That conflict is that humans have unbounded desires, and resources are bounded.
The drive to make goods and services for each other, in pursuit of selfish wants, does incentivize labor, but it’s not because “society requires” it, except in a pretty blurry aggregation of individual “requires”. Price discovery is only half of what market transactions do. The other half is usage limits and resource distribution. These are two sides of a coin and can’t be separated—without limited amounts there is no price; without prices there is no agent-driven exchange of different kinds of resources.
I’m with you that modern capitalism is pretty unpleasant due to optimization pressure, and due to the easy aggregation of far more people and resources than historically possible, and than human culture was evolved around. I don’t see how the underlying system has any alternative that doesn’t do away with individual desire and consumption. Especially the relative/comparative consumption that seems to drive a LOT of perceived-luxury requirements.
I think some version of distributing intergalactic property rights uniformly (e.g. among humans existing in 2023) combined with normal capitalism isn’t clearly that catastrophic. (The distribution is what you call the egalitarian/democratic solution in the link.)
Maybe you lose about a factor of 5 or 10 over the literally optimal approach from my perspective (but maybe this analysis is tricky due to two-envelope problems).
(You probably also need some short term protections to avoid shakedowns etc.)
Are you pessimistic that people will bother reflecting or thinking carefully prior to determining resource utilization or selling their property? I guess I feel like 10% of people being somewhat thoughtful matches the rough current distribution of extremely rich people.
If the situation was something like “current people, weighted by wealth, deliberate for a while on what to do with our resources” then I agree that’s probably 5–10 times worse than the best approach (which is still a huge haircut) but not clearly catastrophic. But it’s not clear to me that’s what the default outcome of competitive dynamics would look like—sufficiently competitive dynamics could force out altruistic actors, who would be outcompeted by non-altruistic ones.
I think one crux between you and me, at least, is that you see this as a considered decision about how to divide resources, and I see it as an equilibrium consensus/acceptance of what property rights to enforce in the maintenance, creation, and especially transfer of control/usage of resources. You think of static division; I think of equilibria and motion. Both are valid, but experience and resource use are ongoing and must be accounted for.
I’m happy that the modern world generally approves of self-ownership: a given individual gets to choose what to do (within limits, but it’s nowhere near the case that my body and mind are part of the resources allocated by whatever mechanism is being considered). It’s generally considered an alignment failure if individual will is just a resource that the AI manages. Physical resources (and behavioral resources, which are a sale of the results of some human efforts, a distinct resource from the mind performing the action) are generally owned by someone, and they trade some results to get the results of other people’s resources (including their labor and thought-output).
There could be a side-mechanism for some amount of resources just for existing, but it’s unlikely that it can be the primary transfer/allocation mechanism, as long as individuals have independent and conflicting desires. Currently valuable self-owned products (office work, software design, etc.) would probably decline in value a lot. If all human output becomes valueless (in the “tradable for other desired things or activities” sense of valuable), I don’t think current humans will continue to exist.
Wirehead utopia (including real-world “all desires fulfilled without effort or trade”) doesn’t sound appealing or workable for what I know of my own and general human psychology.
self-ownership: a given individual gets to choose what to do (within limits, but it’s nowhere near the case that my body and mind are part of the resources allocated by whatever mechanism is being considered)
for most people, this is just the right to sell their body to the machine. better than being forced at gunpoint, but being forced to by an empty fridge is not that much better, especially with monopoly accumulation as the default outcome. I agree that being able to mark one’s selfhood boundaries with property contracts is generally good, but the ability to massively expand one’s property contracts to exclude others from resource access is effectively a sort of scalping—sucking up resources so as to participate in an emergent cabal of resource withholding. In other words,
It’s generally considered an alignment failure if individual will is just a resource that the AI manages.
The core argument that there’s something critically wrong with capitalism is that the stock market has been an intelligence aggregation system for a long time and has a strong tendency to suck up the air in the system.
Utopia would need to involve a load-balancing system that can prevent this sucking-up-the-air type of resource-control imbalance.
for most people, this is just the right to sell their body to the machine.
I think this is a big point of disagreement. For most people, there’s some amount of time/energy that’s sold to the machine, and it’s NOWHERE EVEN CLOSE to selling their actual asset (body and mind). There’s a LOT of leisure time, and a LOT of freedom even within work hours, and the choice to do something different tomorrow. It may not be as rewarding, but it’s available and the ability to make those decisions has not been sold or taken.
yeah like, above a certain level of economic power that’s true, but the overwhelming majority of humans are below that level, and AI is expected to raise that waterline. it’s kind of the primary failure mode I expect.
I mean, the 40-hour work week movement did help a lot. But it was an instance of a large organizing push to demand constraints on what the aggregate intelligence (which at the time was the stock market—a trade market of police-enforceable ownership contracts) could demand of people who were not highly empowered. And it involved leveling a lopsided playing field with things that one side considered dirty tricks, such as strikes. I don’t think that’ll help against AI, to put it lightly.
To be clear, I recognize that your description is accurate for a significant portion of people. But it’s not close to the majority, and movement towards making it the majority has historically demanded changing the enforceable rules in ways that reliably constrain the aggregate agency of the high-dimensional control system steering the economy. When we have a sufficiently more powerful one of those is when we expect failure, and right now it doesn’t seem to me that there’s any movement on a solution to that. We can talk about “oh, we need something better than capitalism,” but the problem with the stock market is simply that it’s enforceable prediction, thereby sucking enough air from the room that a majority of people do not get the benefits you’re describing. If they did, then you’re right, it would be fine!
I mean, there’s also this, but somehow I expect that won’t stick around long after robots get cheap enough relative to humans.
I think we’re talking past each other a bit. It’s absolutely true that the vast majority historically and, to a lesser extent, in modern times, are pretty constrained in their choices. This constraint is HIGHLY correlated with distance from participation in voluntary trade (of labor or resources).
I think the disconnect is the word “capitalism”—when you talk about stock markets and price discovery, that says to me you’re thinking of a small part of the system. I fully agree that there are a lot of really unpleasant equilibria with the scale and optimization pressure of the current legible financial world, and I’d love to undo a lot of it. But the underlying concept of enforced and agreed property rights and individual human decisions is important to me, and seems to be the thing that gets destroyed first when people decry capitalism.
Ok, it sounds, even to me, like “The heads. You’re looking at the heads. Sometimes he goes too far. He’s the first one to admit it.” But really, I STRONGLY expect that I am experiencing peak human freedom RIGHT NOW (well, 20 years ago, but it’s been rather flat for me and my cultural peers for a century, even if somewhat declining recently), and capitalism (small-c, individual decisions and striving, backed by financial aggregation with fairly broad participation) has been a huge driver of that. I don’t see any alternatives that preserve the individuality of even a significant subset of humanity.
If property rights to the stars are distributed prior to this, why does this competition cause issues? Maybe you basically agree here, but think it’s unlikely property will be distributed like this.
Separately, for competitive dynamics with reasonable rule of law and alignment ~solved, why do you think the strategy stealing assumption won’t apply? (There are a bunch of possible objections here, just wondering what yours is. Personally I think strategy stealing is probably fine if the altruistic actors care about the long run and are strategic.)