Stimulation of the pleasure center is a substitute measure for genetic fitness and neurochemicals are a substitute measure for happiness.
This is true from the point of view of natural selection. But genetic fitness is significantly different from what actual people say they want, feel they want, consciously try to optimize, or end up optimizing (for most people, most of the time). Actually maximizing inclusive genetic fitness (IGF) would mostly interfere with people’s happiness.
If “wireheading” means anything that deviates from maximizing IGF, then I favor (certain kinds of) wireheading and oppose IGF, and so do most people. Avoiding wireheading in that sense is not something I want.
Looking at your examples of wireheading:
Stimulation of the brain via electrodes.
Bad because the rats die of hunger (and by analogy, literally wireheaded humans might run a similar risk). Also bad because it subverts the mechanism for choosing and judging outcomes by subjecting it to superstimulation it wasn’t designed for.
Humans on drugs.
In Brave New World, the use of soma shortened lifespans; even so, it was a reasonable tradeoff. If a drug like soma really existed and had no downsides—no addiction, no cost, no side effects—then of course it would be a good thing.
It’s like asking: what if we discovered a new activity that’s more fun than sex in every way, and gave up sex to free up time for it? Your answer seems to be that this would be sad because sex is part of our “true” utility function. But that contradicts the stipulation that the new activity is more fun than sex in every way.
The experience machine.
Bad because you’re giving up the chance to improve life expectancy in the real world, and reducing the total number of future people, possibly from infinity to a finite number (though not everyone cares about that). But if those concerns didn’t exist—if we were post-singularity and discovered that the most enjoyable life was in a simulation rather than in the real universe—then why not take that option?
An AGI resetting its utility function.
That may not be what you want the AGI to do, but it’s clearly what it wants for itself. In the case of humans, there’s no creator whose wishes we need to consider, so I see no reason not to wirehead on this score. If I could modify my utility function—or rather, my reward function—I would make a lot of changes.
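To make that last distinction concrete, here is a minimal toy sketch in standard reinforcement-learning terms (my own illustration, not something from the original post): the reward function is just the mapping from outcomes to the signal that drives behavior, so editing it changes which outcomes feel rewarding without changing the outcomes themselves. The names default_reward, edited_reward, and total_reward are invented for this example.

```python
# Toy illustration (hypothetical): a "reward function" is just a mapping from
# outcomes to the scalar signal an agent is trained on. Editing that mapping
# changes what feels rewarding without changing the outcomes themselves.
from typing import Callable, Dict, List

RewardFn = Callable[[str], float]

def default_reward(outcome: str) -> float:
    """Stand-in for the reward function we start out with."""
    table: Dict[str, float] = {
        "eat sugar": 1.0,
        "exercise": -0.5,
        "learn something": 0.25,
    }
    return table.get(outcome, 0.0)

def edited_reward(outcome: str) -> float:
    """The same agent after making 'a lot of changes' to its reward function."""
    table: Dict[str, float] = {
        "eat sugar": 0.0,
        "exercise": 1.0,
        "learn something": 1.0,
    }
    return table.get(outcome, 0.0)

def total_reward(reward_fn: RewardFn, history: List[str]) -> float:
    """Sum the reward signal over an unchanged history of real-world outcomes."""
    return sum(reward_fn(outcome) for outcome in history)

history = ["eat sugar", "exercise", "learn something"]
print(total_reward(default_reward, history))  # 0.75
print(total_reward(edited_reward, history))   # 2.0
```

The history of outcomes is identical in both cases; only the signal attached to it differs, which is why changing the reward function is not the same as changing what actually happens in the world.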