I believe that humans have natural psychological defenses against the lure of wireheading, because the appeal is something we navigate in our everyday lives. In my case, I know I would really enjoy entertaining myself all the time (watching movies, eating good food, reading books), but eventually I would run out of money or feel guilty that I’m not accomplishing anything.
Even if you tell people there will be no long-term consequences to wireheading, they don’t believe you. It’s a matter of good character, actually, to be resistant to wanting to wirehead. For example, when people signal that they wouldn’t wirehead because they prefer real interaction with external reality, there is a reward (social and self-generated). (When I decide that I would ‘never wirehead’, I feel a sense of security and well-being.) I don’t know why these intuitions don’t work on you; perhaps you have a different set of background experiences, so that you bypass the ‘if-I-don’t-stay-aware-now-I’ll-lose-later’ associations.
It seems to me that those who insist that they wouldn’t wirehead, besides reaping social rewards for signaling a willingness to be altruistic, haven’t fully taken to heart that values are not externally validated. If you have complex values x and y, they might as well be simple values s and t; nothing outside your head privileges one set over the other. I think there are real (and healthy) physical/biological barriers to realizing this, so that even intellectuals won’t become psychopaths.
But I agree with you that it depends on how you phrase the question, and there are intuition pumps that pump the other way:
Futurists imagine a utopian future. But even if we achieved one, it wouldn’t change the past. Why should the future be so elevated in importance? No, any perfect future would be marred by the fact that for thousands of years in the past there was human suffering.
If the future can’t be perfect because of the past, perhaps we could instead create a perfect sub-universe, one that is flawless from beginning to end. Even if the outer universe can’t be utopian, it can simulate utopias.
Then you might realize that by wireheading, you are simulating a utopian universe and thereby doing your part (one consciousness at a time) to create utopias. Then the only moral reason not to wirehead is if you think you have enough influence to create more subjective happiness by staying out than your wireheaded consciousness would generate alone.
Sometimes, my thoughts bend solipsistic (or at least simulation-based) and I wonder if the universe I’m in is already optimized. I’m skeptical because I hear there is suffering, but perhaps that is some necessary negativity for optimizing my existence (part and parcel of having ‘purpose’). I think I must actually believe this, because stories of real suffering cause me intense disillusionment: I don’t expect the suffering to be real, and when I try to imagine it being real I feel a strong resistance. I observe that many people seem to feel this way, when they’re not in outright denial about suffering being ‘real’.
I was thinking earlier today (I’ll just throw this in) that my intuition about values tends to be a little different from what is often described here, because I feel that 1 consciousness with no suffering would be better than 10 consciousnesses with a mixed bag of experiences. The only reason 10 consciousnesses might be better than 1 is their enjoyment of one another. So I suppose I value quality over quantity, and I also don’t distinguish among consciousnesses: 10 moments of consciousness can be distributed over 10 people or over 1 person; it doesn’t matter. So if there really were a wireheading machine, I might be convinced that not stepping in is equivalent to causing suffering to another person, equal to all the relative suffering I would incur by not wireheading. However, I’m glad such choices are not available, because (for some reason; as I said, I think it is biological) wireheading just feels like a terrible, negative choice, like entering a coffin. It feels isolated and not real.
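To make that intuition concrete, here is one hedged way to write it down (the notation is my own invention, not anything standard): give each moment of consciousness m a quality score q(m), with suffering scored strongly negative, and let the value of a world be the plain sum over its moments,

V = q(m_1) + q(m_2) + … + q(m_n),

with no extra term for how the moments are partitioned among persons. Under that accounting, ten moments hosted by one person or by ten people contribute identically, and one unblemished consciousness can outscore ten mixed ones whenever the suffering terms in the mixed bag drag its sum below the unblemished total.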
I think there are real (and healthy) physical/biological barriers to realizing this, so that even intellectuals won’t become psychopaths.

If it’s healthy not to be a psychopath, on what values do you base that? I think you’re sneaking in a value judgment here that, if valid, would rule out wireheading.
(It might be evolutionarily successful not to be a (full) psychopath, but that’s a very different matter.)
I do find your overall thought process in your first few paragraphs plausible, but “anyone who disagrees with me is just not admitting that I’m right” sounds way too much like the kind of toxic reasoning I’m trying to avoid, so I’m fairly skeptical of it.
Just in case: I don’t argue that people who say they don’t want to wirehead are wrong about that. I think it’s ultimately inconsistent with a full appreciation that values are not externally validated, and I think that full appreciation is prevented by biological stop-guards.
Equivalence of Wireheading and Modifying Values as Giving Up on External Satisfaction of Our Current Values
Something I think about in relation to wireheading (so close together in my brain that when talking about one I find myself conflating it with the other) is this: if values aren’t externally validated, then it should be equivalent to ‘make the world better’ by (a) changing the world to fit our values or by (b) changing our values to fit the world. We have a strong preference for the former, but we could modify this preference so that (b) would seem just as good a solution. So by modifying their preference between solutions (a) and (b), a person could in theory self-modify to be perfectly happy with the universe as it is. This is equivalent to wireheading, because in both cases you end up with a perfectly happy person without altering the universe outside their brain.
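Here is a hedged sketch of that symmetry, again in made-up notation: let S(V, W) measure how well world-state W satisfies value-set V. Route (a) holds V fixed and moves the world toward W* = argmax_W S(V, W); route (b) holds W fixed and moves the values toward V* = argmax_V S(V, W). If for any world there is some value set it satisfies perfectly (say, V_W = ‘prefer W exactly as it is’), both routes terminate at maximal S. Judged by the satisfaction of the agent who results, the two endpoints are indistinguishable; they differ only when scored against the original, unmodified V, and whether that score carries any external authority is exactly the point in dispute.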
What I Think People Don’t Admit
I think what ‘anyone who disagrees with you is not admitting’ is that the universe in which your values are altered to match reality (or in which a person chooses to wirehead) is just as good as any other universe.
Well, maybe they do admit it, but then their arational preference for their current values is not unlike a preference for wireheading.
The goodness of the universe is subjective, and for any subjective observer, the universe is good if it satisfies their preferences. Thus, a universe in which our values are modified to match the universe is just as good as one in which the universe is modified to match our values. I think that is clear.
However, people who don’t want to wirehead compare universe (b) (one in which their values are modified but the universe is not) with the universe they currently prefer (I guess as they are supposed to) and decide that universe (b) is not as good, relative, of course, to their current set of values.
But I don’t understand their preference for their original set of preferences if they know these preferences aren’t actually, really, externally better. This is the contradiction I find: they insist that external reality is what matters to them, rather than happiness through wireheading. But preferring to go on preferring a set of values that has no external significance is preferring to live in a wireheaded universe, in the sense that the worth of those values is just in their head after all.
One difference, I suppose, is that our current preferences are hard-wired by biology, at least for now, whereas choosing to wirehead for happiness would be a choice. If we want to minimize the extent to which we’re wired, we’d stick to a minimal set. In which case, as soon as we have the chance to shake off the yoke of biological preferences, we should self-modify into blissfully happy rocks (inert objects entirely satisfied with the way the universe currently is).
Yup, full agreement.
That’s exactly how it appears to me, though I’m not confident this is correct. It seems like others should’ve thought of the same thing, and then they shouldn’t disagree, yet they do. So either this is far less convincing than I think (maybe these safeguards don’t work in my case) or it’s wrong. Dunno right now.
By ‘healthy’, I did mean evolutionarily successful. However, I wouldn’t go to great lengths to defend the statement, so I think you did catch me saying something I didn’t entirely mean.
Someone can be intellectual and emotionally detached at times, and this can help them make more rational choices. However, someone who is too emotionally detached doesn’t empathize with other people (or even with themselves) and doesn’t care about their goals. So I meant something more general than lack of empathy: something closer to apathy. My claim is that biological stop-guards prevent us from becoming too apathetic about external reality. (For example, when I imagine wireheading, I start empathizing with all the people I’d be abandoning. In general, a person should feel the tug of all their unmet values and goals.)
Ok, then I misunderstood you and we do in fact agree.