I’ve been thinking about wireheading and the nature of my values. Many people here have defended the importance of external referents or complex desires. My problem is, I can’t understand these claims at all.
To clarify, I mean wireheading in the strict “collapsing into orgasmium” sense. A successful implementation would identify all the reward circuitry and directly stimulate it, or do something equivalent. It would essentially be vastly improved heroin. A good argument for either keeping complex values (e.g. by requiring at least a personal Matrix) or external referents (e.g. by showing that a simulation can never suffice) would work for me.
Also, I use “reward” as short-hand for any pleasant feeling, as “pleasure” tends to be used for a specific one of them, among euphoria, flow and so on, and “it’s not about feeling X, but X and Y” is still wireheading after all.
I tried collecting all related arguments I could find. (Roughly sorted from weak to very weak, as I understand them, plus links to example instances. I also searched whatever literature and other sites I could think of, but didn’t find other arguments that weren’t blatantly incoherent.)
People do not always optimize their actions based on maximizing rewards. (People are also horrible at making predictions and great at rationalizing their failures afterwards.)
It is possible to hate doing something while wanting to continue, or vice versa, to enjoy something without wanting to continue. (Seriously? I can’t remember ever doing either. What makes you think the wanting is valid, and that you aren’t just making mistaken predictions about rewards or being exploited? Also, Mind Projection Fallacy.)
A wireheaded “me” wouldn’t be “me” anymore. (What’s this “self” you’re talking about? Why does it matter that it’s preserved?)
“I don’t want it and that’s that.” (Why? What’s this “wanting” you do? How do you know what you “want”? (see end of post))
People, if given a hypothetical offer of being wireheaded, tend to refuse. (The exact result depends heavily on the exact question being asked. There are many biases at work here and we normally know better than to trust the majority intuition, so why should we trust it here?)
Far-mode predictions tend to favor complex, external actions, while near-mode predictions are simpler, more hedonistic. Our true self is the far one, not the near one. (Why? The opposite is equally plausible, as is the near/far model being false altogether.)
If we imagine a wireheaded future, it feels like something is missing or like we won’t really be happy. (Intuition pump.)
It is not socially acceptable to embrace wireheading. (So what? Also, depends on the phrasing and society in question.)
(There have also been technical arguments against specific implementations of wireheading. I’m not concerned with those, as long as they don’t show impossibility.)
Overall, none of this sounds remotely plausible to me. Most of it is outright question-begging or relies on intuition pumps that don’t even work for me.
It confuses me that others might be convinced by arguments of this sort, so it seems likely that I have a fundamental misunderstanding or there are implicit assumptions I don’t see. I fear that I have a large inferential gap here, so please be explicit and assume I’m a Martian. I genuinely feel like Gamma in A Much Better Life.
To me, all this talk about “valuing something” sounds like someone talking about “feeling the presence of the Holy Ghost”. I don’t mean this in a derogatory way, but it matches the pattern “sense something funny, therefore some very specific and otherwise unsupported claim”. How do you know it’s not just, you know, indigestion?
What is this “valuing”? How do you know that something is a “value”, terminal or not? How do you know what it’s about? How would you know if you were mistaken? What about unconscious hypocrisy or confabulation? Where do these “values” come from (i.e. what process creates them)? Overall, it sounds to me like people are confusing their feelings about (predicted) states of the world with caring about states directly.
To me, it seems like it’s all about anticipating and achieving rewards (and avoiding punishments, but for the sake of the wireheading argument, it’s equivalent). I make predictions about what actions will trigger rewards (or instrumentally help me pursue those actions) and then engage in them. If my prediction was wrong, I drop the activity and try something else. If I “wanted” something, but getting it didn’t trigger a rewarding feeling, I wouldn’t take that as evidence that I “value” the activity for its own sake. I’d assume I suck at predicting or was ripped off.
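To make that loop concrete, here is a minimal sketch in Python. The activities, the numbers, and the update rule are all invented for illustration; this is how I model my own decision procedure, not a claim about actual neural machinery:

```python
import random

# Hypothetical illustration of the decision loop described above: act on the
# highest predicted reward, observe the actual reward, revise the prediction.

# Initial (possibly wrong) predictions of how rewarding each activity is.
predicted_reward = {"write": 0.8, "socialize": 0.5, "exercise": 0.2}

# What each activity actually delivers, unknown to the agent.
actual_reward = {"write": 0.3, "socialize": 0.6, "exercise": 0.7}

LEARNING_RATE = 0.5

for _ in range(20):
    # "Wanting" here is nothing more than acting on the highest prediction.
    action = max(predicted_reward, key=predicted_reward.get)

    # Observe a noisy reward signal for the chosen activity.
    reward = actual_reward[action] + random.gauss(0, 0.05)

    # A wrong prediction is treated as a prediction error, not as evidence of
    # some deeper "value": the estimate just moves toward the observation,
    # and once it drops below another option's, the agent switches activities.
    predicted_reward[action] += LEARNING_RATE * (reward - predicted_reward[action])

print(predicted_reward)
```

On this picture, “wanting” is just acting on the highest prediction, and being “ripped off” is just a large prediction error; no separate notion of “valuing” is needed anywhere.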
Can someone give a reason why wireheading would be bad?