You would only wirehead if doing so would prevent you from doing active, intentional harm to others. Why is your standard so high? TheOtherDave’s speculative scenario should be sufficient to make you support wireheading, if your argument against it is social good—since in his scenario it is clearly net better to wirehead than not to.
It seems, then, that anti-wireheading boils down to the claim ‘wireheading, boo!’.
This is not a convincing argument to people whose brains don’t say ‘wireheading, boo!’ to them. My impression was that denisbider’s top-level post was a call for an anti-wireheading argument more convincing than this.
As a wirehead advocate, I want to present my response to this as bluntly as possible, since I think my position is what generally underlies the wirehead position, and I never see it addressed.
I simply don’t believe that you really value understanding and exploration. I think that your brain (mine too) simply says to you ‘yay, understanding and exploration!’. What’s more, the only way you even know this much is from how you feel about exploration—on the inside—when you are considering it or engaging in it. That is, from how much ‘pleasure’ or wirehead-subjective-experience-nice-feelings-equivalent you get from it. You say to your brain: ‘so, what do you think about making scientific discoveries?’ and it says right back to you: ‘making discoveries? Yay!’
Since literally every single thing we value just boils down to ‘my brain says yay about this’ anyway, why don’t we just hack the brain (or its equivalent) to say ‘yay!’ as much as possible?
I think you are right that we don’t disagree on the ‘basis of morality’ issue. My claim is only what you said above: there is no objective bedrock for morality, and there’s no evidence that we ought to do anything other than max out our utility functions. I am sorry for the digression.
We disagree if you intended to claim that ‘our goals’ are the bedrock on which we should base the notion of ‘ought’, since we can take the moral skepticism a step further and ask: what evidence is there of any ‘ought’ above ‘maxing out our utility functions’?
A further point of clarification: it doesn’t follow—by definition, as you say—that what is valuable is what we value. Would making paperclips become valuable if we created a paperclip maximiser? What if paperclip maximisers outnumbered humans? I think benthamite is right: the assumption that ‘what is valuable is what we value’ tends to be smuggled into arguments without further defense. This is the move that the wirehead rejects.
Note: I took the statement ‘what is valuable is what we value’ to be equivalent to ‘things are valuable because we value them’. The statement has another possible meaning: ‘we value things because they are valuable’. I think both are incorrect for the same reason.
What evidence is there that we should value anything more than what mental states feel like from the inside? That’s what the wirehead would ask. He doesn’t care about goals. Let’s see some evidence that our goals matter.
‘I don’t want that’ doesn’t imply ‘we don’t want that’. In fact, if the ‘we’ refers to humanity as a whole, then denisbider’s own position is a counterexample that refutes the claim by definition.
Even if I could have selected the links, I wouldn’t have tried it, because you just know that clicking on something like that will open a new page and delete all of your entered data.
I just took the survey, making this my first post that someone will read!
For what it’s worth, I’m probably going to be in Auckland early next year, and I would come to the meetup.
Does it follow from that, then, that you could consider taking the perspective of your post-wirehead self?