ISTM that the major flaw in Hanson’s logic is the assumption that uploads won’t replace themselves with simpler nonsentient systems distilled from their expertise. The real evolutionary pressure wouldn’t be toward optimum levels of pain and pleasure, but toward replacing motivation with automation: it takes less power, computing time, and storage space.
The issue is the time period being considered. I don’t claim to analyze an asymptotic future after all tech change has stopped. I instead try to consider the “next” era after foraging, farming, industry. While that era might be short on a cosmic timescale, it may be long subjectively to the creatures involved.
At the moment human minds are vastly more productive than automation. Automation is slowly getting more capable, yes, but with ems, human minds will also increase in efficiency.
At what? Tasks involving perceptual control? Social interaction?
That’s an explicit assumption he makes: that even future ems will fail to design AIs, or highly modified or nonhuman ems, that outcompete ordinary human ems. This seems very unlikely to me, but it’s the premise of the discussion, as you correctly note.
Edit: I didn’t mean to imply that Hanson makes this assumption without arguing for or justifying it. I’m pretty sure he’s posted about it.