But that doesn’t seem to be the case. [...] Why do people sacrifice their own lives to save their loved ones? From the point of view of a utility maximizer this is hard to justify. After all, it is unlikely that the utility you could receive from the limited time spent saving your loved ones can outweigh the utility of the rest of your life without them.
I agree that if humans made decisions based on utility calculations that aren’t grounded in direct sensations, then that’d be a good argument against wireheading.
I see, however, no reason to believe that humans actually do such things, except that it would make utilitarianism look really neat and practical. (The fact that currently no one actually manages to act based on utilitarianism of any kind seems like evidence against it.) It doesn’t look realistic to me. People rarely sacrifice themselves for causes, and when they do, it always involves tons of social pressure. (Just look at suicide bombers.) Their actual motivations are much more nicely explained in terms of the sensations (anticipated and real) they get out of it. Assuming faulty reasoning, conflicting emotional demands, and just plain confabulation for the messier cases seems like the simpler hypothesis, as we already know all those things exist and are the kinds of things evolution would produce.
Whenever I encounter a thought of the sort “I value X, objectively”, I always manage to dig into it and find the underlying sensations that give it that value. If I put them on hold (or realize that they are mistakenly attached, because X wouldn’t actually cause the sensations I expect), then that value disappears. I can see my values grounded in sensations; I can’t manage to find any others. Models based on that assumption (like PCT) seem to work just fine, so I’m not sure I’m actually missing something.