Hang on—before we were assuming that the Robots (ems) were consumers. Here we’re assuming the opposite, that humans and only humans consume. Therefore the consumption basket can’t go haywire.
Actually one way things could go wrong would be if an “elite” group of humans took the place of ems, and consumed 99.999999999999999999999% of output. So in order for things to be OK, economic disparity has to remain non-insanely-high. But even the modest taxes that we have today, plus wealth redistribution, would ensure this, and it seems that there would be stronger incentives to increase wealth redistribution than to decrease it.
Re: “Hang on—before we were assuming that the Robots (ems) were consumers. Here we’re assuming the opposite, that humans and only humans consume. Therefore the consumption basket can’t go haywire.”
What counts as “consumption” is a matter of definition, not fact. Even if you book the “consumption” by machines as capital investment or intermediate goods purchases, it’s still there, and if machines play an increasingly prominent role, it can significantly influence the prices of goods that humans consume. With machines that approach human levels of intelligence and take over increasingly intelligent and dexterous human jobs, this difference will become an increasingly fictional accounting convention.
Land rent is another huge issue. Observe the present situation: food and clothing are nowadays dirt cheap, and unlike in the past, starving or having to go around without a warm coat in the winter are no longer realistic dangers no matter how impoverished you get. Yet living space is not much more affordable relative to income than in the past, and becoming homeless remains a very realistic threat. And if you look at interest rates versus prices, you’ll find that the interest on a fairly modest amount would nowadays be enough to feed and clothe yourself adequately—but not to afford an adequate living space. (Plus, the present situation isn’t that bad because you can loiter in public spaces, but in a future of soaring land rents, these will likely become much more scarce. Humans require an awful lot of space to subsist tolerably.)
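To make that arithmetic concrete, here is a back-of-the-envelope sketch; every number in it is an invented, purely illustrative assumption rather than an actual statistic:

```python
# Back-of-the-envelope sketch of the claim above. All figures are invented,
# purely illustrative assumptions, not statistics from the discussion.

principal = 60_000         # a "fairly modest amount" of savings (assumed)
real_interest_rate = 0.04  # assumed real rate of return

annual_food = 1_800        # assumed bare-bones food budget
annual_clothing = 300      # assumed bare-bones clothing budget
annual_rent = 9_600        # assumed cheapest adequate living space

annual_interest = principal * real_interest_rate

print(f"Interest income:        {annual_interest:8,.0f}")
print(f"Food + clothing:        {annual_food + annual_clothing:8,.0f}")
print(f"Food + clothing + rent: {annual_food + annual_clothing + annual_rent:8,.0f}")

# With these assumptions the interest (2,400) covers subsistence food and
# clothing (2,100) but falls far short once a living space is added (11,700).
```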
Re: “So in order for things to be OK, economic disparity has to remain non-insanely-high.”
When it comes to the earnings from rent and interest, the present economic disparity is already insanely high. What makes it non-insanely-high overall is the fact that labor can be sold for a high price—and we’re discussing the scenario where this changes.
I’ll certainly agree that poorer humans might run out of land that’s all owned by a few rich humans. If the value of labor dropped to zero, then land ownership would become critically important, as it is one of the few resources that are essentially not producible, and therefore the who-owns-the-land game is zero-sum.
But is land really unproducible in this scenario? Remember, we’re assuming very high levels of technology. Maybe the poorer humans would all end up as seasteaders or desert dwellers?
What about the possibility of producing land underground?
What about producing land in space?
The bottom line seems to be that our society will have to change drastically in many ways, but that the demise of the need for human labor would be a good thing overall.
Re: “But is land really unproducible in this scenario? Remember, we’re assuming very high levels of technology. Maybe the poorer humans would all end up as seasteaders or desert dwellers? What about the possibility of producing land underground? What about producing land in space?”
Given a well-organized and generous system of redistribution, the situation actually wouldn’t be that bad. Despite all the silly panicking about overpopulation, the Earth is a pretty big place. To get some perspective, at the population density of Singapore, ten billion people could fit into roughly 1% of the total world land surface area. This is approximately the size of present-day Mongolia. With the population density of Malta—hardly a dystopian metropolis—they’d need about 5% of the Earth’s land, i.e. roughly the area of the continental U.S.
Therefore, assuming the powers-that-be would be willing to do so, in a super-high-tech regime several billion unproductive people could be supported in one or more tolerably dense enclaves at a relatively low opportunity cost. The real questions are whether the will to do so will exist, what troubles might ensue during the transition, and whether these unproductive billions will be able to form a tolerably functional society. (Of course, it is first necessary to dispel the delusion—widely taken as a fundamental article of faith among economists—that technological advances can never render great masses of people unemployable.)
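(As a quick sanity check of the density figures above, here is a rough calculation; the densities and areas are rounded approximations, so treat the results as ballpark estimates only.)

```python
# Rough check of the density figures above. Densities and areas are rounded
# approximations; treat all of them as ballpark assumptions.

population = 10e9            # ten billion people
world_land_area = 149e6      # km^2 of land on Earth (approx.)

scenarios = [
    # (label,                    people/km^2, comparison area,    km^2)
    ("Singapore-level density",  7_000,       "Mongolia",         1.56e6),
    ("Malta-level density",      1_300,       "continental U.S.", 8.0e6),
]

for label, density, comparison, comparison_area in scenarios:
    needed = population / density            # km^2 required for 10 billion people
    share = 100 * needed / world_land_area   # percentage of the world's land surface
    print(f"{label}: {needed / 1e6:.2f} million km^2 "
          f"(~{share:.1f}% of land; {comparison} is ~{comparison_area / 1e6:.2f} million km^2)")
```

With these rounded inputs the Singapore case comes out at roughly 1% of the land surface and the Malta case at roughly 5%, in line with the figures quoted above.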
Now, you write:
The bottom line seems to be that our society will have to change drastically in many ways, but that the demise of the need for human labor would be a good thing overall.
I’m not at all sure of that. I hate to sound elitist, but I suspect that among the common folk, a great many people would not benefit from the liberation from the need to work. Just look at how often lottery winners end up completely destroying their lives, or what happens in those social environments where living off handouts becomes the norm. It seems to me that many, if not most people need a clear schedule of productive work around which they can organize their lives, and lacking it become completely disoriented and self-destructive. The old folk wisdom that idle hands are the devil’s tools has at least some truth in it.
This is one reason why I’m skeptical of redistribution as the solution, even under the assumption that it will be organized successfully.
Re: “Therefore, assuming the powers-that-be would be willing to do so, in a super-high-tech regime several billion unproductive people could be supported in one or more tolerably dense enclaves at a relatively low opportunity cost. The real questions are whether the will to do so will exist, what troubles might ensue during the transition, and whether these unproductive billions will be able to form a tolerably functional society.”
Organic humans becoming functionally redundant is likely to be the beginning of the end for them. They may well be able to persist for a while as useless parasites on an engineered society—but any ultimate hope for becoming something other than entities of historical interest would appear to lie with integration into that society—and that would take a considerable amount of “adjustment”.
Re: “It seems to me that many, if not most people need a clear schedule of productive work around which they can organize their lives, and lacking it become completely disoriented and self-destructive.”
I think that would just be another service or product that people purchased, be it in the form of cognitive enhancement, voluntary projects, hobbies, etc. In fact, lottery winners simply suffer from not being numerous enough to support a lottery-winner rehabilitation industry.
I agree that such optimistic scenarios are possible; my gloomy comments aren’t meant to prophesy certain doom, but rather to shake what I perceive as an unwarrantably high level of optimism and lack of consideration for certain ugly but nevertheless real possibilities.
Still, one problem I think is particularly underestimated in discussions of this sort is how badly the law of unintended consequences can bite whenever it comes to the practical outcomes of large-scale social changes and interventions. This could be especially relevant in future scenarios where the consequences of the disappearing demand for human labor are remedied with handouts and redistribution. Even if we assume that such programs will be successfully embarked upon (which is by no means certain), it is a non-trivial question what other conditions will have to be satisfied for the results to be pretty, given the existing experiences with somewhat analogous situations.
Re: “before we were assuming that the Robots (ems) were consumers. Here we’re assuming the opposite, that humans and only humans consume.”
More accurately, Martin Ford was assuming that—and I was pointing out that trucks, fridges, washing machines, etc. are best modelled as consumers too—since they consume valuable low-entropy resources—and spit out useless waste products.
The idea that machines don’t participate in the economy as consumers is not a particularly useful one. Machines—and companies—buy things, sell things, consume things—and generally do participate. Those machines that don’t buy things have things bought for them on their behalf (by companies or humans) - and the overall effect on economic throughput is much the same as if the machines were buying things themselves.
If you really want to ignore direct consumption by machines—and pretend that the machines are all working exclusively for humans, doing our bidding precisely—then you have GOT to account for people and companies buying things for the machines that they manage—or your model badly loses touch with reality.
In practice, it is best to just drop the assumption. Computer viruses / chain letters are probably the most obvious illustration of the problem with the idea that machines are exclusively “on our side”, labour on our behalf, and have no interests of their own.
The mis-handling of this whole issue is one of the problems with “The Lights in the Tunnel”.
Would this analysis apply to the ecosystem as a whole? Should we think of fungus as consuming low entropy plant waste and spitting out higher entropy waste products? Is a squirrel eating an acorn part of the economy?
Machines, as they currently exist, have no interests of their own. Any “interests” they may appear to have are as real as the “interest” gas molecules have in occupying a larger volume when the temperature increases. Computer viruses are simply a way that machines malfunction. The fact that machines are not exclusively on our side simply means that they do not perfectly fulfill our values. Nothing does.
Taking those in order: not without some changes; yes; and not part of the human economy.
Various machines certainly behave in goal-directed ways—and so have what can usefully be described as “vested interests”—along the lines described here: http://en.wikipedia.org/wiki/Vested_interest
Can you say what you mean by “interests”? Probably any difference of opinion here is a matter of differing definitions—and so is not terribly interesting.
Re: “The fact that machines are not exclusively on our side simply means that they do not perfectly fulfill our values.”
That wasn’t what I meant—what I meant is that they don’t completely share human values—not that they don’t fulfill them.
By interests, I mean concerns related to fulfilling values. For the time being, I consider human minds to be the only entities complex enough to have values. For example, it is very useful to model a cancer cell as having the goal of replicating, but I don’t consider it to have replicating as a value.
The cancer example also shows that our own cells don’t fulfill or share our values, and yet we still model the consumption of cancer cells as the consumption of a human being.
Re: “If you really want to ignore direct consumption by machines—and pretend that the machines are all working exclusively for humans, doing our bidding precisely—then you have GOT to account for people and companies buying things for the machines that they manage—or your model badly loses touch with reality.”
I think I might have the biggest issue with this line. Nobody is pretending that machines are all working exclusively for humans, any more than we pretend our cells are working exclusively for us. The idea is that we account for the machine consumption the same way we account for the consumption of our own cells, by attributing it to the human consumers.
The idea being criticised is that—if a few humans dominate the economy by commanding huge armies of robot minions, then—without substantial taxation—the economy will grind to a halt—since hardly any humans are earning any money, and therefore hardly any humans are spending any money.
The problem with that is that the huge armies of robot minions are consuming vast quantities of material while competing with each other for resources—and the purchase of all those goods is not being accounted for anywhere in the model—apparently because of the ideas that only humans are consumers and most humans are unemployed.
It seems like a fairly straightforward modelling mistake to me. The purchase of robot fuel and supplies has GOT to be accounted for. Account for it as mega-spending by the human managing director if you really must—but account for it somewhere. As soon as you do that, the whole idea that increased automation leads to financial meltdown vanishes like a mirage.
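To spell the accounting point out, here is a toy sketch with invented numbers: total spending comes out the same whether the robots’ inputs are booked as machine consumption or as purchases made by their human owners.

```python
# Toy illustration of the accounting point above (all numbers invented):
# total spending is the same whether the robots' fuel and supplies are
# booked as "machine consumption" or as purchases made by the humans who
# own the robots. The demand collapse only appears if that spending is
# left out of the model entirely.

human_consumption = 10          # what the few wealthy humans buy for themselves
robot_fuel_and_supplies = 990   # what the robot workforce burns through

# Convention A: machines are modelled as consumers in their own right.
spending_a = human_consumption + robot_fuel_and_supplies

# Convention B: only humans consume; the owners are booked as buying the
# fuel and supplies on the machines' behalf.
owner_purchases_for_robots = robot_fuel_and_supplies
spending_b = human_consumption + owner_purchases_for_robots

assert spending_a == spending_b == 1000
print(spending_a, spending_b)  # 1000 1000 under either bookkeeping convention
```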
We already have a pretty clear idea about the effect of automation on the economy—from Japan and South Korea. The machines do a load of work, and their bodies need feeding—creating demand for raw materials and fuel—and the economy is boosted.
How does needing raw materials create employment for the rest of the population? If everything is mechanized, then raw materials come from those who own mines/wells, and the extraction is done by robot labor. That doesn’t involve very many people.
It doesn’t create employment for the rest of the humans. In this scenario, most humans are unemployed—and probably rather poor—due to the hypothesised lack of “substantial taxation” and government handouts. The throughput of the economy arises essentially from the efforts of the machines.
There is another take on the word “value”—which defines it to mean that which goal-directed systems want.
That way, you can say things like: “Deep Blue usually values bishops more than knights”.
To me, such usage seems vastly superior to using “values” to refer to something that only humans have.
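As a minimal sketch of what that usage amounts to, here is a generic material-count evaluation with conventional piece weights (not Deep Blue’s actual evaluation function, just an illustration of a system that “values” bishops above knights):

```python
# A generic material-count evaluation in which bishops are weighted slightly
# above knights. An engine built around such a function will, other things
# being equal, prefer keeping a bishop to keeping a knight -- which is what
# it means to say it "values" bishops more.

PIECE_VALUES = {"pawn": 1.0, "knight": 3.0, "bishop": 3.25,
                "rook": 5.0, "queen": 9.0}

def material_score(pieces):
    """Sum the weights of the pieces a side still has on the board."""
    return sum(PIECE_VALUES[p] for p in pieces)

print(material_score(["bishop", "pawn"]) > material_score(["knight", "pawn"]))  # True
```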