I think you are missing the point.
First, throw out the FAI part of this argument; we can consider an FAI just as a tool to help us achieve our goals. Any AI which does not do at least this is insufficiently friendly (and thus possibly counts as a paperclipper).
Thus, the actual question is: what are our goals? I don’t know about you, but I value understanding and exploration. If you value pleasure, good! Have fun being a wirehead.
It comes down to the fact that a world where everyone is a wirehead is not valued by me, or probably by many other people. Even though this world would maximize pleasure, it wouldn’t maximize the utility of the people designing the world (I think this is the util/hedon distinction, but I am not sure). If we don’t value that world, why should we create it, even if we would value it after we create it?
The way I see it, there is a set of preferable reward qualia we can experience (pleasure, wonder, empathy, pride) and a set of triggers attached to them in the human mind (sexual contact, learning, bonding, accomplishing a goal). What this article says is that there is no inherent value in the triggers, only in the rewards. Why rely on plugs when you can short-circuit the outlet?
But that misses an entire field of points: there are certain forms of pleasure that can only be reached through the correct association of triggers and rewards. Basking in the glow of wonder from cosmological inquiry and revelation is not the same without an intellect piecing together the context. You can have bliss and love and friendship all bundled up into one sensation, but without the STORY, without a timeline of events and shared experience that make up a relationship, you are missing a key part of that positive experience.
tl;dr: Experiencing pure rewards without relying on triggers is a limited way of experiencing the pleasures of the universe.
Like many others below, your reply assumes that what is valuable is what we value. Yet as far as I can see, this assumption has never been defended with arguments in this forum. Moreover, the assumption seems clearly false. A person whose brain was wired differently from most people’s may value states of horrible agony. Yet the fact that this person valued these states would not constitute a reason for thinking them valuable. Pain is bad because of how it feels, rather than by virtue of the attitudes that people have towards painful states.
Well, by definition. I think what you mean is that there are things that “ought to be” valuable which we do not actually value [enough?]. But what evidence is there that there is any “ought” above our existing goals?
What evidence is there that we should value anything more than what mental states feel like from the inside? That’s what the wirehead would ask. He doesn’t care about goals. Let’s see some evidence that our goals matter.
What would evidence that our goals matter look like?
Just to be clear, I don’t think you’re disagreeing with me.
We disagree if you intended to make the claim that ‘our goals’ are the bedrock on which we should base the notion of ‘ought’, since we can take the moral skepticism a step further, and ask: what evidence is there that there is any ‘ought’ above ‘maxing out our utility functions’?
A further point of clarification: It doesn’t follow—by definition, as you say—that what is valuable is what we value. Would making paperclips become valuable if we created a paperclip maximiser? What about if paperclip maximisers outnumbered humans? I think benthamite is right: the assumption that ‘what is valuable is what we value’ tends just to be smuggled into arguments without further defense. This is the move that the wirehead rejects.
Note: I took the statement ‘what is valuable is what we value’ to be equivalent to ‘things are valuable because we value them’. The statement has another possible meaning: ‘we value things because they are valuable’. I think both are incorrect for the same reason.
I think I must be misunderstanding you. It’s not so much that I’m saying that our goals are the bedrock, as that there’s no objective bedrock to begin with. We do value things, and we can make decisions about actions in pursuit of things we value, so in that sense there’s some basis for what we “ought” to do, but I’m making exactly the same point you are when you say:
what evidence is there that there is any ‘ought’ above ‘maxing out our utility functions’?
I know of no such evidence. We do act in pursuit of goals, and that’s enough for a positivist morality, and it appears to be the closest we can get to a normative morality. You seem to say that it’s not very close at all, and I agree, but I don’t see a path to closer.
So, to recap, we value what we value, and there’s no way I can see to argue that we ought to value something else. Two entities with incompatible goals are to some extent mutually evil, and there is no rational way out of it, because arguments about “ought” presume a given goal both can agree on.
Would making paperclips become valuable if we created a paperclip maximiser?
To the paperclip maximizer, they would certainly be valuable, ultimately so. If you have some other standard, some objective measurement, of value, please show it to me. :)
By the way, you can’t say the wirehead doesn’t care about goals: part of the definition of a wirehead is that he cares most about the goal of stimulating his brain in a pleasurable way. An entity that didn’t care about goals would never do anything at all.
I think that you are right that we don’t disagree on the ‘basis of morality’ issue. My claim is only that which you said above: there is no objective bedrock for morality, and there’s no evidence that we ought to do anything other than max out our utility functions. I am sorry for the digression.
I agree with the rest of your comment, and depending on how you define “goal”, with the quote as well. However, what about entities driven only by heuristics? Those may have developed to pursue a goal, but not necessarily so. Would you call an agent that is only heuristics-driven goal-oriented? (I have in mind simple commands along the lines of “go left when there is a light on the right”; think Braitenberg vehicles minus the evolutionary aspect.)
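For concreteness, a minimal sketch of what I mean by “only heuristics-driven” (illustrative Python, all names made up):

```python
# A Braitenberg-style vehicle: pure stimulus-response wiring, with no
# explicit goal and no evaluation of outcomes. Purely illustrative.

def heuristic_step(light_left: float, light_right: float) -> str:
    """Fixed reaction rule: go left when there is a light on the right."""
    if light_right > light_left:
        return "turn_left"
    if light_left > light_right:
        return "turn_right"
    return "go_straight"

# In practice the vehicle ends up avoiding light, but nothing in the code
# represents "avoid light" as a goal; it is just the effect of the wiring.
print(heuristic_step(light_left=0.2, light_right=0.9))  # turn_left
```

Whether you want to call that emergent light-avoidance a “goal” is exactly the definitional question I have in mind.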
Yes, I thought about that when writing the above, but I figured I’d fall back on the term “entity”. ;) An entity would be something that could have goals (sidestepping the hard work of deciding exactly which objects qualify).
See also
Hard to be original anymore. Which is a good sign!
What is valuable is what we value, because if we didn’t value it, we wouldn’t have invented the word “valuable” to describe it.
By analogy, suppose my favourite colour is red, but I speak a language with no term for “red”. So I invent “xylbiz” to refer to red things; in our language, it is pretty much a synonym for “red”. All objects that are xylbiz are my favourite colour. “By definition” to some degree, since my liking red is the origin of the definition “xylbiz = red”. But note that: things are not xylbiz because xylbiz is my favourite colour; they are xylbiz because of their physical characteristics. Nor is xylbiz my favourite colour because things are xylbiz; rather xylbiz is my favourite colour because that’s how my mind is built.
It would, however, be fairly accurate to say that if an object is xylbiz, it is my favourite colour, and it is my favourite colour because it is xylbiz (and because of how my mind is built). It would also be accurate to say that “xylbiz” refers to red things because red is my favourite colour, but this is a statement about words, not about redness or xylbizness.
Note that if my favourite colour changed somehow, so that now I like purple and invent the word “blagg” for it, things that were previously xylbiz would not become blagg; however, you would notice I stop talking about “xylbiz” (actually, being human, I would probably just redefine “xylbiz” to mean purple rather than define a new word).
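To make the word/thing distinction concrete, a toy sketch (illustrative Python, invented names):

```python
# "xylbiz" is a label I coined because red happens to be my favourite colour.
# What counts as xylbiz is fixed by the objects' colours, not by my tastes.

XYLBIZ = "red"       # a fact about my vocabulary
favourite = "red"    # a fact about how my mind is built

def is_xylbiz(object_colour: str) -> bool:
    # An object is xylbiz because of its physical characteristics...
    return object_colour == XYLBIZ

def is_my_favourite(object_colour: str) -> bool:
    # ...and it happens to be my favourite colour because of my mind.
    return object_colour == favourite

# My tastes change to purple; I coin "blagg" for the new favourite.
favourite = "purple"
BLAGG = "purple"

print(is_xylbiz("red"))        # True: redness didn't change
print(is_my_favourite("red"))  # False: only the fact about my mind changed
```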
By the way, the philosopher would probably ask “what evidence is there that we should value what mental states feel like from the inside?”
Agreed.
This is one reason why I don’t like to call myself a utilitarian. Too many cached thoughts/objections associated with that term that just don’t apply to what we are talking about.
As a wirehead advocate, I want to present my response to this as bluntly as possible, since I think my position is more generally what underlies the wirehead position, and I never see this addressed.
I simply don’t believe that you really value understanding and exploration. I think that your brain (mine too) simply says to you ‘yay, understanding and exploration!’. What’s more, the only way you even know this much is from how you feel about exploration, on the inside, when you are considering it or engaging in it. That is, how much ‘pleasure’ or wirehead-subjective-experience-nice-feelings-equivalent you get from it. You say to your brain: ‘so, what do you think about making scientific discoveries?’ and it says right back to you: ‘making discoveries? Yay!’
Since literally every single thing we value just boils down to ‘my brain says yay about this’ anyway, why don’t we just hack the brain equivalent to say ‘yay!’ as much as possible?
If I were about to fall off a cliff, I would prefer that you satisfy your brain’s desire to pull me back by actually pulling me back, not by hacking your brain to believe you had pulled me back while I in fact plunge to my death. And if my body needs nutrients, I would rather satisfy my hunger by actually consuming nutrients, not by hacking my brain to believe I had consumed nutrients while my cells starve and die.
I suspect most people share those preferences.
That pretty much summarizes my objection to wireheading in the real world.
That said, if we posit a hypothetical world where my wireheading doesn’t have any opportunity costs (that is, everything worth doing is going to be done as well as I can do it or better, whether I do it or not), I’m OK with wireheading.
To be more precise, I share the sentiment that others have expressed that my brain says “Boo!” to wireheading even in that world. But in that world, my brain also says “Boo!” to not wireheading for mostly the same reasons, so that doesn’t weigh into my decision-making much, and is outweighed by my brain’s “Yay!” to enjoyable experiences.
Said more simply: if nothing I do can matter, then I might as well wirehead.
Because my brain says ‘boo’ about the thought of that.
It seems, then, that anti-wireheading boils down to the claim ‘wireheading, boo!’.
This is not a convincing argument to people whose brains don’t say to them ‘wireheading, boo!’. My impression was that denisbider’s top level post was a call for an anti-wireheading argument more convincing than this.
I use my current value system to evaluate possible futures. The current me really doesn’t like the possible future me sitting stationary in the corner of a room doing nothing, even though that version of me is experiencing lots of happiness.
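To put the same point in toy form (illustrative Python, made-up numbers): it is my current utility function that ranks the futures, not the utility function the future me would have.

```python
# Two candidate futures, scored by two different value systems.
# The numbers are invented purely for illustration.

futures = ["sit wireheaded in a corner", "keep engaging with the world"]

def current_utility(future: str) -> float:
    # My present values: happiness counts, but so do exploration and impact.
    return {"sit wireheaded in a corner": 2.0,
            "keep engaging with the world": 8.0}[future]

def wireheaded_utility(future: str) -> float:
    # The values the wireheaded future-me would report.
    return {"sit wireheaded in a corner": 10.0,
            "keep engaging with the world": 1.0}[future]

# The choice is made now, so current_utility does the ranking.
print(max(futures, key=current_utility))  # "keep engaging with the world"
```

The wireheaded version of me would rank the futures the other way, but that ranking never gets a say in the decision I am making now.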
I guess I view wireheading as equivalent to suicide; you’re entering a state in which you’ll no longer affect the rest of the world, and from which you’ll never emerge.
No arguments will work on someone who’s already wireheaded, but for someone who is considering it, hopefully they’ll consider the negative effects on the rest of society. Your friends will miss you, you’ll be a resource drain, etc. We already have an imperfect wireheading option; we call it drug addiction.
If none of that moves you, then perhaps you should wirehead.
Is the social-good argument your true rejection, here?
Does it follow from this that if you concluded, after careful analysis, that you sitting stationary in a corner of a room experiencing various desirable experiences would be a net positive to the rest of society (your friends will be happy for you, you’ll consume fewer net resources than if you were moving around, eating food, burning fossil fuels to get places, etc., etc.), then you would reluctantly choose to wirehead, and endorse others for whom the same were true to do so?
Or is the social good argument just a soldier here?
After some thought, I believe that the social good argument, if it somehow came out the other way, would in fact move me to reluctantly change my mind. (Your example arguments didn’t do the trick, though; to get my brain to imagine an argument that would move me, I had to imagine a world where my continued interaction with other humans in fact harms them in ways I cannot avoid; something like: I am an evil person, I don’t wish to be evil, and it is not possible for me to cease being evil, all at once.) I’d still at least want a Minecraft version of wireheading and not a drugged-out version, I think.
Cool.
You will only wirehead if that would prevent you from doing active, intentional harm to others. Why is your standard so high? TheOtherDave’s speculative scenario should be sufficient to get you to support wireheading, if your argument against it is social good, since in his scenario it is clearly net better to wirehead than not to.
None of the things he lists are true for me personally, and I had trouble imagining worlds in which they were true of me or anyone else. (The exception is the resource argument: I imagine e.g. welfare recipients would consume fewer resources, but anyone gainfully employed AFAIK generally adds more value to the economy than they remove.)
FWIW, I don’t find it hard to imagine a world where automated tools that require fewer resources to maintain than I do are at least as good as I am at doing any job I can do.
Ah, see, for me that sort of world has human-level machine intelligence, which makes it really hard to make predictions about.
Yes, agreed that automated tools with human-level intelligence are implicit in the scenario.
I’m not quite sure what “predictions” you have in mind, though.
That was poorly phrased, sorry. I meant it’s difficult to reason about in general. Like, I expect futures with human-level machine intelligences to be really unstable and either turn into FAI heaven or uFAI hell rapidly. I also expect them to not be particularly resource constrained, such that the marginal effects of one human wireheading would be pretty much nil. But I hold all beliefs about this sort of future with very low confidence.
Confidence isn’t really the issue here.
If I want to know how important the argument from social good is to my judgments about wireheading, one approach to teasing that out is to consider a hypothetical world in which there is no net social good to my not wireheading, and see how I judge wireheading in that world. One way to visualize such a hypothetical world is to assume that automated tools capable of doing everything I can do already exist, which is to say tools at least as “smart” as I am for some rough-and-ready definition of “smart”.
Yes, for such a world to be at all stable, I have to assume that such tools aren’t full AGIs in the sense LW uses the term—in particular, that they can’t self-improve any better than I can. Maybe that’s really unlikely, but I don’t find that this limits my ability to visualize it for purposes of the thought experiment.
For my own part, as I said in an earlier comment, I find that the argument from social good is rather compelling to me… at least, if I posit a world in which nothing I might do improves the world in any way, I feel much more comfortable about the decision to wirehead.
Agreed. If you’ll reread my comment a few levels above, I mention that the resource argument is an exception in that I could see situations in which it applied (I find my welfare-recipient case much more likely than your scenario, but either way, same argument).
It’s primarily the “your friends will be happy for you” bit that I couldn’t imagine, but trying to imagine it made me think of worlds where I was evil.
I mean, I basically have to think of scenarios where it’d really be best for everybody if I killed myself. The only difference between wireheading and suicide with regard to the rest of the universe is that suicides consume even fewer resources. Currently I think suicide is a bad choice for everyone, with a few obvious exceptions.
Well, you know your friends better than I do, obviously.
That said, if a friend of mine moved somewhere where I could no longer communicate with them, but I was confident that they were happy there, my inclination would be to be happy for them. Obviously that can be overridden by other factors, but again it’s not difficult to imagine.
It’s interesting that the social aspect is where most of the concern seems to be.
I have to wonder what situation would result in wireheading being permanent (no exceptions), without some kind of contact with the outside world as an option. If the economic motivation behind technology doesn’t change dramatically by the time wireheading becomes possible, it’d need to have commercial appeal. Even if a simulation tricks someone who wants to get out into believing they’ve gotten out, if they had a pre-existing social network that notices them not coming out of it, the backlash could still hurt the providers.
I know for me personally, I have so few social ties at present that I don’t see any reason not to wirehead. I can think of one person who I might be unpleasantly surprised to discover had wireheaded, but that person seems like he’d only do that if things got so incredibly bad that humanity looked something like doomed. (Where “doomed” is… pretty broadly defined, I guess.) If the option to wirehead were given to me tomorrow, though, I might ask it to wait a few months just to see if I could maintain sufficient motivation to attempt to do anything with the real world.
I think the interesting discussion to be had here is to explore why my brain thinks of a wireheaded person as effectively dead, but yours thinks they’ve just moved to Antarctica.
I think it’s the permanence that makes most of the difference for me. And the fact that I can’t visit them even in principle, and the fact that they won’t be making any new friends. The fact that their social network will have zero links for some reason seems highly relevant.
We don’t need to be motivated by a single purpose. The part of our brains that does morality and considers what is good for the rest of the world, the part of our brains that just finds it aesthetically displeasing to be wireheaded for whatever reason, the part of our brains that just seeks pleasure: they may all have different votes of different weights to cast.
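Something like this toy weighted vote, say (illustrative Python, with invented weights and votes):

```python
# Different brain subsystems cast weighted votes on "wirehead or not".
# Weights and votes are made up purely for illustration.

votes = {
    # subsystem: (weight, vote on wireheading: +1 for, -1 against)
    "morality (what is good for the rest of the world)": (0.5, -1),
    "aesthetics (wireheading just seems displeasing)": (0.3, -1),
    "pleasure-seeking": (0.2, +1),
}

score = sum(weight * vote for weight, vote in votes.values())
decision = "wirehead" if score > 0 else "don't wirehead"
print(round(score, 2), decision)  # -0.6 don't wirehead
```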
I against my brother, my brothers and I against my cousins, then my cousins and I against strangers.
Which bracket do I identify with at the point in time when I am asked the question? Which perspective do I take? That’s what determines the purpose. You might say: well, your own perspective. But that’s the thing, my perspective depends, apart from the time of day and my current hormonal status, on the way the question is framed, and on which identity level I identify with most at that moment.
Does it follow from that that you could consider taking the perspective of your post-wirehead self?
Consider in the sense of “what would my wireheaded self do”, yes. Similar to Anja’s recent post. However, I’ll never (I can’t imagine the circumstances) be in a state of mind where doing so would seem natural to me.
Yes. But insofar as that’s true, lavalamp’s idea that Raoul589 should wirehead if the social-good argument doesn’t move them is less clear.
I simply don’t believe that you really value understanding and exploration. I think that your brain (mine too) simply says to you ‘yay, understanding and exploration!’.
So what would “really valuing” understanding and exploration entail, exactly?
why don’t we just hack the brain equivalent to say ‘yay!’ as much as possible?
Because my brain does indeed say “yay!” about stuff, but hacking my brain to constantly say “yay!” isn’t one of the things that my brain says “yay!” about.