I see wireheading as another problem that results from utility maximization. The question is: can utility be objectively grounded for an agent? If it can, wireheading might be objectively rational for a human utility maximizer.
Consider what it would mean if utility were ultimately measured in some unit of bodily sensations. We do what we do because of what it does to us (to our body, our brain): it makes us feel good, and makes us feel bad if we don't do it. It would then be rational to fake the realization of our goals and receive the good feelings with less effort and risk, provided we were equally happy doing so and the utility of wireheading outweighed the expected utility of actually realizing our goals.
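To make the comparison concrete, here is a minimal sketch of that decision rule, assuming utility really is nothing but sensation-derived payoff; every function name and number here is an illustrative assumption, not anything measured:

```python
# Toy model of the wireheading comparison described above, assuming utility
# is just a sum of sensation-derived payoffs. All numbers are made up.

def expected_utility(p_success: float, payoff: float, effort_cost: float) -> float:
    """Expected utility of actually pursuing a goal: chance of success times
    the felt payoff, minus the effort and risk it takes to get there."""
    return p_success * payoff - effort_cost

def wirehead_utility(payoff: float, setup_cost: float) -> float:
    """Utility of faking the same payoff directly: the identical felt payoff
    (by assumption) at a much lower cost and with near-certain success."""
    return payoff - setup_cost

real = expected_utility(p_success=0.3, payoff=100.0, effort_cost=20.0)  # 10.0
fake = wirehead_utility(payoff=100.0, setup_cost=1.0)                   # 99.0

# If utility is grounded purely in sensations, wireheading wins whenever it
# delivers the same feeling more cheaply and more reliably.
print("wirehead" if fake > real else "pursue the real goal")
```

Under that assumption the inequality almost always favors wireheading, which is exactly why the next observation matters.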
But that doesn't seem to be the case. Either humans are not utility maximizers, or utility is not objectively grounded, i.e. humans assign arbitrary amounts of utility to arbitrary decisions and world states.
Why do people sacrifice their own lives to save their loved ones? From the point of view of a utility maximizer with complex values this is hard to justify. After all, it is unlikely that the utility you could receive during the brief time spent saving a loved one would outweigh the utility of the rest of your life without them. Even if you think you could not live without someone, there are therapies. Such decisions only seem to make sense if humans are either not utility maximizers, or are able to assign an infinite amount of utility to whatever they want to do, based on naive introspection.
For example, I want to learn and understand as much as possible about the nature of reality. One way to objectively ground this desire, and thereby measure its expected utility, is by the amount of positive bodily sensations I receive from pursuing that goal. But such a payoff could probably be realized artificially (simulated) much more efficiently and with fewer risks (no need to build that high-energy particle accelerator for real). So if what I really want is to be happy, to feel good, then I should probably choose the simulated revelation over the real truth. But I would never do that, even knowing that I will often suffer as a result of my desire to learn about the universe and how it works. I would even accept its destruction rather than be wireheaded into believing that I had figured it all out.
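That refusal only comes out of a utility calculation if something other than sensations enters it. As a hedged sketch, extending the toy model above with a hypothetical term for valuing the actual state of the world (a term the sensation-only view denies exists):

```python
# Hypothetical extension of the toy model: utility as sensation plus a
# weighted term for the world actually being the way it feels. The weight
# and values are illustrative assumptions, nothing more.

def total_utility(sensation: float, world_state_value: float, truth_weight: float) -> float:
    """Sensation-derived payoff plus how much the agent cares that the
    world really is the way it seems to be."""
    return sensation + truth_weight * world_state_value

# Same kind of felt payoff either way; only the real pursuit changes the world.
real = total_utility(sensation=10.0, world_state_value=100.0, truth_weight=1.0)  # 110.0
fake = total_utility(sensation=99.0, world_state_value=0.0, truth_weight=1.0)    #  99.0

# With a large enough weight on the real state, refusing to wirehead becomes
# the utility-maximizing choice, which matches the intuition reported above.
print("refuse wireheading" if real > fake else "wirehead")
```

Whether such a weight is "objectively grounded" or just an arbitrary assignment is precisely the open question.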
But that doesn't seem to be the case. [...] Why do people sacrifice their own lives to save their loved ones? From the point of view of a utility maximizer this is hard to justify. After all, it is unlikely that the utility you could receive during the brief time spent saving a loved one would outweigh the utility of the rest of your life without them.
I agree that if humans made decisions based on utility calculations that aren’t grounded in direct sensations, then that’d be a good argument against wireheading.
I see, however, no reason to believe that humans actually do such things, except that it would make utilitarianism look really neat and practical. (The fact that currently no one actually manages to act on utilitarianism of any kind seems like evidence against it.) It doesn't look realistic to me. People rarely sacrifice themselves for causes, and it always takes tons of social pressure (just look at suicide bombers). Their actual motivations are much more nicely explained in terms of the sensations (anticipated and real) they get out of it. Assuming faulty reasoning, conflicting emotional demands, and plain confabulation for the messier cases seems like the simpler hypothesis, since we already know all those things exist and are the kinds of things evolution would produce.
Whenever I encounter a thought of the sort "I value X, objectively", I always manage to dig into it and find the underlying sensations that give it that value. If I put them on hold (or realize that they are mistakenly attached, because X wouldn't actually cause the sensations I expect), then that value disappears. I can see my values grounded in sensations; I can't manage to find any others. Models based on that assumption seem to work just fine (like PCT), so I'm not sure I'm actually missing something.