I take it jacob_cannell has in mind neither a benevolent godlike FAI nor a hostile (or indifferent-but-in-competition) godlike UFAI, in either of which cases all questions of traditional economics are probably off the table, but rather a gradual encroachment of non-godlike AI on what’s traditionally been human territory. Imagine, in particular, something like the “em” scenarios Robin Hanson predicts, where there’s no superduperintelligent AI but lots of human-level AIs, probably the result of brain emulation or something very like it, who can do pretty much any of the jobs currently done by biological humans.
If the cost of running (or being) an emulated human goes down exponentially according to something like Moore’s law, then we soon have—not the classic UFAI scenario where humans are probably extinct or worse, nor the benevolent-AI scenario where everyone’s material needs are satisfied by the AI—but an economy that works rather like the one we have now except that almost any job that needs a human being to do it can be done quicker and cheaper by a simulated human being than by a biological one.
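The cost dynamic above can be made concrete with a toy calculation. This is a hypothetical illustration only: the starting cost, the human wage, and the two-year halving period are all invented numbers standing in for "something like Moore's law."

```python
# Toy model of the paragraph above: if the cost of running an em halves
# every two years (a Moore's-law-style assumption), even a very expensive
# em undercuts flat biological wages within a couple of decades.

def years_until_cheaper(em_cost_per_hour, human_wage_per_hour,
                        halving_period_years=2.0):
    """Years until an exponentially falling em cost drops below a flat human wage."""
    years = 0.0
    while em_cost_per_hour >= human_wage_per_hour:
        em_cost_per_hour /= 2
        years += halving_period_years
    return years

# Hypothetical example: an em that initially costs $10,000/hour of compute
# vs. a $20/hour biological worker. 9 halvings suffice -> 18.0 years.
print(years_until_cheaper(10_000, 20))
```

The point is not the specific numbers but the shape of the curve: under any sustained exponential cost decline, the crossover arrives on a timescale of decades at most.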
At that point, maybe some biological humans are owners of emulated humans or the hardware they run on, and maybe they can reap some or all the gains of the ems’ fast cheap work. And, if that happens, maybe they will want some other biological humans to do jobs that really do need actual flesh. (Prostitution, perhaps?) Other biological humans are out of luck, though.
Given that jacob_cannell is talking about food and housing, I don’t think he has the ems scenario in mind.
The scenario I think he has in mind is one in which there are both biological humans and ems; he identifies more with the biological humans, and he worries that the biological humans are going to have trouble surviving because they will be outcompeted by the ems.
(I’m pretty skeptical about Hansonian ems too, for what it’s worth.)
I think the Hansonian em scenario is probably closer to the truth than the others, but it focuses perhaps too much on generalists. The DL explosion will also produce vastly powerful specialists that are general enough to do complex human jobs yet remain limited or savant-like in other respects. Yes, there’s a huge market for generalists, but that isn’t the only niche.
Take this Go AI for example—critics like to point out that it can’t drive a car, but why would you want it to? Car driving is a different niche, which will be handled by networks specifically trained for that niche to superhuman level. A generalist AGI could ‘employ’ these various specialists as needed, perhaps on fast timescales.
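A minimal sketch of the ‘generalist employing specialists’ idea, under loose assumptions: all class and niche names here are hypothetical, and real systems would route tasks to separately trained networks rather than string-keyed callables.

```python
# Sketch of a generalist that delegates niche tasks to specialist models.
# Everything here is illustrative; no real AGI architecture is implied.

class Generalist:
    """Routes each task to a registered specialist for its niche,
    falling back to its own (weaker) general competence otherwise."""

    def __init__(self):
        self.specialists = {}

    def register(self, niche, specialist):
        self.specialists[niche] = specialist

    def handle(self, niche, task):
        specialist = self.specialists.get(niche)
        if specialist is not None:
            return specialist(task)          # superhuman in its niche
        return f"generalist attempt at {task!r}"  # merely adequate

agi = Generalist()
agi.register("go", lambda task: f"superhuman Go move for {task!r}")
agi.register("driving", lambda task: f"superhuman driving plan for {task!r}")

print(agi.handle("go", "board state"))     # routed to the Go specialist
print(agi.handle("cooking", "make soup"))  # no specialist: generalist fallback
```

The design choice mirrors the comment above: the Go network never needs to drive a car, because the generalist’s job is dispatch, not universal competence.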
Specialization in human knowledge has increased over time; AI will accelerate that trend.
I am not a big fan of ems anyway—I don’t think the situation as Hanson describes it is stable.