If you haven’t heard of it yet, I recommend the novel Crystal Society (freely available here; there’s also a $5 Kindle version).
You could accurately describe it as “what Inside Out would have been if it looked inside the mind of an AI rather than a human girl, and if the society of mind had been composed of essentially sociopathic subagents that still came across as surprisingly sympathetic and co-operated with each other due to game theoretic and economic reasons, all the while trying to navigate the demands of human scientists building the AI system”.
Brienne also had a good review of it here.
Hmm. The review scared me a bit, and the home page’s talk of incredibly nearsighted populist economics is a huge turn-off. Still, I probably need to read it.
Is the Kindle version different in any way from the free mobi file? I’ll gladly spend $5 for good formatting or easier reading, but I’d prefer not to pay Amazon if they’re not providing value.
Haven’t compared the two, but I would assume not. The formatting on the Kindle version was nothing fancy, just standard.
I think I saw the author commenting that he’d have put it up on Kindle for free as well if that were possible, and there’s no mention of it on the story’s site, so it’s probably not intended as a “deluxe edition”.
What’s wrong with the economics on the home page? It seems fairly straightforward and likely. Mass technological unemployment seems at least plausible enough to be worth raising to attention. (Also.)
It (like your link) treats “employment” as a good in itself. This is ridiculous: employment is simply an opportunity to provide value for someone. Goods and services becoming cheap doesn’t prevent people from doing things for each other; it just means different things become important, and a larger set of people (including those who are technically unemployed) get more stuff that’s now near-free to create.
Goods and services becoming cheaper is basically the economist’s definition of progress, so that’s all good.
There is no natural law which ensures that everyone has earnings potential greater than cost of living. New tech isn’t making food or housing cheaper fast enough, and can’t be expected to in the future. AI could suddenly make most of the work force redundant without making housing or food free.
Indeed not, but that correct idea often leads people to the incorrect idea that robotics-induced disemployment, and subsequent impoverishment, are technological inevitabilities. Whether everybody is going to have enough income to eat depends on how the (increased) wealth of such a society is distributed... basically, to get to the worst-case scenario, you need a sharp decline of interest in wealth redistribution, even compared to US norms. It’s a matter of public policy, not technological inevitability. So it’s not really the robots taking over that people should be afraid of, it’s the libertarians taking over.
I am not sure what that is supposed to mean. There is enough food and living space to go round, globally, but it is not going to everyone who needs it, which is, again, a (re)distribution problem.
First, what’s “fast enough”? Look up statistics on what fraction of its income an average American family spent on food a hundred years ago versus now.
Second, why don’t you expect it in the future? Biosynthesizing food doesn’t seem to be a huge problem in the context that includes all-powerful AIs...
Fast enough would be Moore’s law: the price of food falling by 2x every couple of years. Anything less than this could lead to biological humans becoming economically unviable, even as brains in vats.
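To make the claimed rate concrete, here’s a minimal Python sketch of that decline; the starting price is hypothetical, since only the halving rate matters for the argument:

```python
# Minimal sketch of a Moore's-law-style price decline: the price halves
# every 2 years, i.e. p(t) = p0 * 0.5 ** (t / 2). The starting price of
# 100 is an arbitrary placeholder.

def price_after(years, start_price=100.0, halving_period=2.0):
    """Price after `years` of exponential halving."""
    return start_price * 0.5 ** (years / halving_period)

for t in (0, 2, 10, 20):
    print(f"year {t:2d}: price {price_after(t):7.2f}")
# year  0: price  100.00
# year  2: price   50.00
# year 10: price    3.12
# year 20: price    0.10
```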
Like this?
Biosynthesized food is an extremely inefficient energy conversion mechanism vs., say, direct solar power. Even in the ideal case, the human body burns about 100 watts. When AGI becomes more power-efficient than that, even magical 100%-efficient solar->food conversion isn’t enough for humans to be competitive. When AGI requires less than 10 watts, even human brains in vats become uncompetitive.
A future of all-powerful AIs is a future where digital intelligence has become more efficient than biological intelligence. So the only solutions where humans remain competitive involve uploading.
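The arithmetic behind this can be sketched as follows. The 100 W figure is from the comment above, ~20 W is the commonly cited figure for the brain alone, and the 10 W AGI draw and electricity price are assumed stand-ins:

```python
# Annual energy cost of keeping one "worker" running at a constant load.
# Wattages: 100 W whole human body, ~20 W brain alone (commonly cited),
# 10 W hypothetical AGI. Electricity price is an assumption.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10  # USD per kilowatt-hour, assumed

def annual_energy_cost(watts):
    """Yearly electricity cost of a constant load of `watts` watts."""
    return watts * HOURS_PER_YEAR / 1000 * PRICE_PER_KWH

for label, watts in [("biological human (whole body)", 100),
                     ("human brain in a vat", 20),
                     ("hypothetical efficient AGI", 10)]:
    print(f"{label:30s} {annual_energy_cost(watts):6.2f} USD/yr")
```

Even with a magically lossless solar-to-food chain, the biological worker’s energy floor stays roughly an order of magnitude above the 10 W machine’s, which is the comparison being made.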
Why so? Human populations do not double every couple of years.
Hold on. We’re not talking about competition between computers and humans. You said that in the future there will not be enough food for all (biological) humans. That has nothing to do with competitiveness.
I think you are misremembering the context. Here’s the first thing he said on the subject:
There is no natural law which ensures that everyone has earnings potential greater than cost of living. New tech isn’t making food or housing cheaper fast enough, and can’t be expected to in the future. AI could suddenly make most of the work force redundant without making housing or food free.
and that is explicitly about the relationship between food cost and earning power in the context of AI.
I was expressing my reservations about the “New tech isn’t making food or housing cheaper fast enough” part.
Of course not everyone has earning potential greater than the cost of living. That has always been so. People in this situation subsist on charity (e.g. that of their family) or they die.
As to an AI making the work force redundant, the question here is what’s happening to the demand side. The situation where an AI says “I don’t need humans, only my needs matter” is your classic UFAI scenario; presumably we’re not talking about that here. So if the AI can satisfy everyone’s material needs (on some scale from basics to luxuries) all by itself, why would people work? And if it’s not going to give (meat) people food and shelter, we’re back to the “don’t need humans” starting point, or humans will run a parallel economy.
I take it jacob_cannell has in mind neither a benevolent godlike FAI nor a hostile (or indifferent-but-in-competition) godlike UFAI, in either of which cases all questions of traditional economics are probably off the table, but rather a gradual encroachment of non-godlike AI on what’s traditionally been human territory. Imagine, in particular, something like the “em” scenarios Robin Hanson predicts, where there’s no superduperintelligent AI but lots of human-level AIs, probably the result of brain emulation or something very like it, who can do pretty much any of the jobs currently done by biological humans.
If the cost of running (or being) an emulated human goes down exponentially according to something like Moore’s law, then we soon have—not the classic UFAI scenario where humans are probably extinct or worse, nor the benevolent-AI scenario where everyone’s material needs are satisfied by the AI—but an economy that works rather like the one we have now except that almost any job that needs a human being to do it can be done quicker and cheaper by a simulated human being than by a biological one.
At that point, maybe some biological humans are owners of emulated humans or the hardware they run on, and maybe they can reap some or all the gains of the ems’ fast cheap work. And, if that happens, maybe they will want some other biological humans to do jobs that really do need actual flesh. (Prostitution, perhaps?) Other biological humans are out of luck, though.
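Under the Moore’s-law cost decline this scenario assumes, the crossover where ems undercut biological wages arrives quickly, and the gap then keeps widening. Here’s a toy sketch with invented numbers:

```python
import math

# Toy model of the em scenario above: the hourly cost of running one em
# halves every 2 years (the Moore's-law assumption). The starting cost
# and the biological wage are invented for illustration.

EM_COST_START = 50.0  # USD/hour to run one em at year 0 (assumed)
HALVING_YEARS = 2.0
HUMAN_WAGE = 15.0     # USD/hour biological wage to undercut (assumed)

def em_cost(years):
    """Hourly cost of one em after `years` of exponential halving."""
    return EM_COST_START * 0.5 ** (years / HALVING_YEARS)

# Years until the em undercuts the biological wage:
crossover = HALVING_YEARS * math.log2(EM_COST_START / HUMAN_WAGE)
print(f"em undercuts a {HUMAN_WAGE:.0f} USD/hr human after ~{crossover:.1f} years")
print(f"ten years after that it costs {em_cost(crossover + 10):.2f} USD/hr")
```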
Given that jacob_cannell is talking about food and housing, I don’t think he has the ems scenario in mind.
I am not a big fan of ems, anyway—I think this situation as described by Hanson is not stable.
The scenario I think he has in mind is one in which there are both biological humans and ems; he identifies more with the biological humans, and he worries that the biological humans are going to have trouble surviving because they will be outcompeted by the ems.
(I’m pretty skeptical about Hansonian ems too, for what it’s worth.)
I think the Hansonian em scenario is probably closer to the truth than the others, but it focuses perhaps too much on generalists. The DL explosion will also result in vastly powerful specialists that are general enough to do complex human jobs, yet remain limited or savant-like in other respects. Yes, there’s a huge market for generalists, but that isn’t the only niche.
Take this Go AI for example—critics like to point out that it can’t drive a car, but why would you want it to? Car driving is a different niche, which will be handled by networks specifically trained for that niche to superhuman level. A generalist AGI could ‘employ’ these various specialists as needed, perhaps on fast timescales.
Specialization in human knowledge has increased over time; AI will accelerate that trend.
If people own the advanced robots or AIs that are responsible for most production, why would they be impoverished by them? More to the point, why would they want the majority of people who don’t own automated factories to be impoverished, since that would mean having no-one to sell to? There’s no law of economics saying that in a wealthy society most people would starve; rather, to keep an economy going in anything like its present form, you have to have redistribution. In such a future, tycoons would be pushing for basic income -- it’s in their own interests.
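A toy circular-flow calculation makes the “no-one to sell to” point concrete (all numbers invented for illustration):

```python
# Toy circular-flow illustration: an automated economy produces OUTPUT
# worth of goods, but the owners can personally consume only a little,
# so the rest must be bought by non-owners, who can only buy if income
# is redistributed to them. All figures are invented.

OUTPUT = 1000.0           # value of goods produced by the automated economy
OWNER_CONSUMPTION = 50.0  # the most the factory owners can personally consume

for transfer in (0.0, 400.0, 950.0):  # income redistributed to non-owners
    demand = OWNER_CONSUMPTION + transfer  # non-owners spend what they receive
    print(f"transfer {transfer:6.1f} -> unsold output {OUTPUT - demand:6.1f}")
# transfer    0.0 -> unsold output  950.0
# transfer  400.0 -> unsold output  550.0
# transfer  950.0 -> unsold output    0.0
```

With no transfers, almost all the output goes unsold; selling everything requires redistributing most of the output’s value, which is why basic income would be in the owners’ own interest in this sketch.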