As a wirehead advocate, I want to present my response to this as bluntly as possible, since I think my position is more generally what underlies the wirehead position, and I never see this addressed.
I simply don’t believe that you really value understanding and exploration. I think that your brain (mine too) simply says to you ‘yay, understanding and exploration!’. What’s more, the only way you even know this much is from how you feel about exploration—on the inside—when you are considering it or engaging in it. That is, how much ‘pleasure’ or wirehead-subjective-experience-nice-feelings-equivalent you get from it. You say to your brain: ‘So, what do you think about making scientific discoveries?’ and it says right back to you: ‘Making discoveries? Yay!’
Since literally every single thing we value just boils down to ‘my brain says yay about this’ anyway, why don’t we just hack the brain equivalent to say ‘yay!’ as much as possible?
If I were about to fall off a cliff, I would prefer that you satisfy your brain’s desire to pull me back by actually pulling me back, not by hacking your brain to believe you had pulled me back while I in fact plunge to my death. And if my body needs nutrients, I would rather satisfy my hunger by actually consuming nutrients, not by hacking my brain to believe I had consumed nutrients while my cells starve and die.
I suspect most people share those preferences.
That pretty much summarizes my objection to wireheading in the real world.
That said, if we posit a hypothetical world where my wireheading doesn’t have any opportunity costs (that is, everything worth doing is going to be done as well as I can do it or better, whether I do it or not), I’m OK with wireheading.
To be more precise, I share the sentiment that others have expressed that my brain says “Boo!” to wireheading even in that world. But in that world, my brain also says “Boo!” to not wireheading for most of the same reasons, so that doesn’t weigh into my decision-making much, and is outweighed by my brain’s “Yay!” to enjoyable experiences.
Said more simply: if nothing I do can matter, then I might as well wirehead.
Because my brain says ‘boo’ about the thought of that.
It seems, then, that anti-wireheading boils down to the claim that ‘wireheading, boo!’.
This is not a convincing argument to people whose brains don’t say to them ‘wireheading, boo!’. My impression was that denisbider’s top level post was a call for an anti-wireheading argument more convincing than this.
I use my current value system to evaluate possible futures. The current me really doesn’t like the possible future me sitting stationary in the corner of a room doing nothing, even though that version of me is experiencing lots of happiness.
I guess I view wireheading as equivalent to suicide; you’re entering a state in which you’ll no longer affect the rest of the world, and from which you’ll never emerge.
No arguments will work on someone who’s already wireheaded, but for someone who is considering it, hopefully they’ll consider the negative effects on the rest of society. Your friends will miss you, you’ll be a resource drain, etc. We already have an imperfect wireheading option; we call it drug addiction.
If none of that moves you, then perhaps you should wirehead.
Is the social-good argument your true rejection, here?
Does it follow from this that if you concluded, after careful analysis, that you sitting stationary in a corner of a room experiencing various desirable experiences would be a net positive to the rest of society (your friends will be happy for you, you’ll consume fewer net resources than if you were moving around, eating food, burning fossil fuels to get places, etc., etc.), then you would reluctantly choose to wirehead, and endorse others for whom the same were true to do so?
Or is the social good argument just a soldier here?
After some thought, I believe that the social good argument, if it somehow came out the other way, would in fact move me to reluctantly change my mind. (Your example arguments didn’t do the trick, though—to get my brain to imagine an argument that would move me, I had to imagine a world where my continued interaction with other humans in fact harms them in ways I cannot avoid; something like ‘I’m an evil person, I don’t wish to be evil, and it’s not possible for me to cease being evil’ all being true.) I’d still want at least a Minecraft version of wireheading and not a drugged-out version, I think.
Cool.
So you will only wirehead if doing so would prevent you from doing active, intentional harm to others. Why is your standard so high? TheOtherDave’s speculative scenario should be sufficient to make you support wireheading, if your argument against it is social good—since in his scenario it is clearly net better to wirehead than not to.
None of the things he lists are true for me personally, and I had trouble imagining worlds in which they were true of me or anyone else. (The exception is the resource argument—I imagine that, e.g., welfare recipients would consume fewer resources, but AFAIK anyone gainfully employed generally adds more value to the economy than they remove.)
FWIW, I don’t find it hard to imagine a world where automated tools that require fewer resources to maintain than I do are at least as good as I am at doing any job I can do.
Ah, see, for me that sort of world has human-level machine intelligence, which makes it really hard to make predictions about.
Yes, agreed that automated tools with human-level intelligence are implicit in the scenario.
I’m not quite sure what “predictions” you have in mind, though.
That was poorly phrased, sorry. I meant it’s difficult to reason about in general. Like, I expect futures with human-level machine intelligences to be really unstable and either turn into FAI heaven or uFAI hell rapidly. I also expect them to not be particularly resource constrained, such that the marginal effects of one human wireheading would be pretty much nil. But I hold all beliefs about this sort of future with very low confidence.
Confidence isn’t really the issue, here.
If I want to know how important the argument from social good is to my judgments about wireheading, one approach to teasing that out is to consider a hypothetical world in which there is no net social good to my not wireheading, and see how I judge wireheading in that world. One way to visualize such a hypothetical world is to assume that automated tools capable of doing everything I can do already exist, which is to say tools at least as “smart” as I am for some rough-and-ready definition of “smart”.
Yes, for such a world to be at all stable, I have to assume that such tools aren’t full AGIs in the sense LW uses the term—in particular, that they can’t self-improve any better than I can. Maybe that’s really unlikely, but I don’t find that this limits my ability to visualize it for purposes of the thought experiment.
For my own part, as I said in an earlier comment, I find that the argument from social good is rather compelling to me… at least, if I posit a world in which nothing I might do improves the world in any way, I feel much more comfortable about the decision to wirehead.
Agreed. If you reread my comment a few levels above, I mention that the resource argument is an exception, in that I could see situations in which it applied (I find my welfare-recipient example much more likely than your scenario, but either way, it’s the same argument).
It’s primarily the “your friends will be happy for you” bit that I couldn’t imagine, but trying to imagine it made me think of worlds where I was evil.
I mean, I basically have to think of scenarios where it’d really be best for everybody if I committed suicide. The only difference between wireheading and suicide with regard to the rest of the universe is that suicides consume even fewer resources. Currently I think suicide is a bad choice for everyone, with a few obvious exceptions.
Well, you know your friends better than I do, obviously.
That said, if a friend of mine moved somewhere where I could no longer communicate with them, but I was confident that they were happy there, my inclination would be to be happy for them. Obviously that can be overridden by other factors, but again, it’s not difficult to imagine.
It’s interesting that the social aspect is where most of the concern seems to be.
I have to wonder what situation would result in wireheading being permanent (no exceptions), without some kind of contact with the outside world as an option. If the economic motivation behind technology doesn’t change dramatically by the time wireheading becomes possible, wireheading would need commercial appeal. Even if a simulation tricked someone who wanted to get out into believing they had gotten out, a pre-existing social network that noticed them never emerging could produce a backlash that hurt the providers.
I know that for me personally, I have so few social ties at present that I don’t see any reason not to wirehead. I can think of one person who I might be unpleasantly surprised to discover had wireheaded, but that person seems like he’d only do that if things got so incredibly bad that humanity looked something like doomed. (Where “doomed” is… pretty broadly defined, I guess.) If the option to wirehead were given to me tomorrow, though, I might ask to wait a few months just to see if I could maintain sufficient motivation to attempt to do anything with the real world.
I think the interesting discussion to be had here is to explore why my brain thinks of a wireheaded person as effectively dead, but yours thinks they’ve just moved to Antarctica.
I think it’s the permanence that makes most of the difference for me. And the fact that I can’t visit them even in principle, and the fact that they won’t be making any new friends. The fact that their social network will have zero links for some reason seems highly relevant.
We don’t need to be motivated by a single purpose. The part of our brains that does morality and considers what is good for the rest of the world, the part that finds it aesthetically displeasing to be wireheaded for whatever reason, and the part that just seeks pleasure may all have different votes of different weights to cast.
I against my brother, my brothers and I against my cousins, then my cousins and I against strangers.
Which bracket do I identify with at the point in time when I’m asked the question? Which perspective do I take? That’s what determines the purpose. You might say—well, your own perspective. But that’s the thing: my perspective depends on—other than the time of day and my current hormonal status—the way the question is framed, and which identity level I identify with most at that moment.
Does it follow from that that you could consider taking the perspective of your post-wirehead self?
Consider it in the sense of “what would my wireheaded self do”, yes. Similar to Anja’s recent post. However, I’ll never (can’t imagine the circumstances) be in a state of mind where doing so would seem natural to me.
Yes. But insofar as that’s true, lavalamp’s idea that Raoul589 should wirehead if the social-good argument doesn’t move them is less clear.
I simply don’t believe that you really value understanding and exploration. I think that your brain (mine too) simply says to you ‘yay, understanding and exploration!’.
So what would “really valuing” understanding and exploration entail, exactly?
why don’t we just hack the brain equivalent to say ‘yay!’ as much as possible?
Because my brain does indeed say “yay!” about stuff, but hacking my brain to constantly say “yay!” isn’t one of the things my brain says “yay!” about.