Interesting thread!
I’m not sure that pjeby has fully addressed Eliezer’s concern that “eliminating my negative emotions would be changing my preferences, and changing my preferences so that they’re satisfied is against my current preferences (otherwise, I’d just go for being an orgasmium)”.
(Well, at least that’s how I’d paraphrase it; Eliezer, tell me if I’m wrong.)
To which I would answer:
Yes, it’s very possible that eliminating some negative emotions would be immoral, or at least would change one’s preferences in a way one’s previous preferences would disagree with (think: eliminating the guilt over killing people, and the like; I wouldn’t be very happy to learn that the army or police of a dictatorship is researching emotion elimination).
Still, there is probably a wide range of negative feelings that could be removed in a way that doesn’t contradict one’s original preferences—in the sense that the pre-modification person wouldn’t find the behaviour of the modified person objectionable.
The line between which changes are OK and which are not isn’t obvious to draw, and many posts on OB talk about it (the difference between the morality of the ancient Greeks and our own, and thus the risk of “freezing” our own morality and barring future moral progress; the Confessor’s objections to non-consensual sex; etc.). pjeby might be being a bit light-handed when he dismisses concerns over changing preferences as “irrational”, but I think he meant that careful examination could show that those changes stayed in the second category (changes the pre-modification person wouldn’t object to) and wouldn’t turn one into an immoral monster.
(It feels a bit weird answering pjeby’s post in the third person, but it seems clearer to me that way :P I’m not responding to this post in particular.)
(Disclaimer: I’m one of pjeby’s clients, but that’s not why I’m here; I’ve been reading Overcoming Bias since nearly the beginning.)
pjeby might be being a bit light-handed when he dismisses concerns over changing preferences as “irrational”
I didn’t (explicitly) dismiss those concerns; I said that away-from reasoning has a higher rationality standard to meet, in part because it’s likely to be vague.
I wasn’t even thinking about preference-changing being dangerous, because our preferences are largely independent and mostly don’t “auto-update” when we change one—there’s a LOT of redundancy. So if a specific change isn’t compatible with your overall morality, you’ll note the dissonance, and change your preferences again to tune things better.
Science-fictional evidence of preference-changing is about as far off as science-fictional evidence of AI behavior… and for the same reasons. The built-in models our brain uses to understand minds and their preferences are simpler than the models the brain uses to create a mind… and its preferences.
Off-topic: shortly after you posted this, it appears that someone undertook a massive vote-down campaign, systematically searching for every comment I’ve ever posted to LW and voting each one down by 1. I don’t know if, or how, these events are correlated.
But if the person who undertook that campaign was trying to send me a message of some sort, they neglected to include any actionable information. I only noticed because the karma number suddenly and dramatically changed when I clicked through from one page to another, reading this morning’s new comments… and that sudden large drop was weird enough to make me investigate.
Otherwise, I probably never would’ve been aware of their action as an action, let alone as any sort of feedback! If you want to communicate something to someone, it’s probably best to be more explicit. Or, in the alternative, contribute a patch to the LW software to let you filter out posts by people you don’t like, or perhaps entire subthreads they participate in.
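(If anyone felt like taking that suggestion seriously, the core of such a filter could be quite small. Here’s a rough sketch in Python; the Comment class, the field names, and the ignore set are all hypothetical, invented for illustration rather than taken from the actual LW codebase.)

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Comment:
    author: str
    body: str
    children: List["Comment"] = field(default_factory=list)

def filter_ignored(comments: List[Comment], ignored: Set[str]) -> List[Comment]:
    """Drop comments by ignored authors, along with the whole subthread under each."""
    kept = []
    for c in comments:
        if c.author in ignored:
            continue  # hide this comment and everything nested beneath it
        c.children = filter_ignored(c.children, ignored)
        kept.append(c)
    return kept

# Hypothetical usage: thread = filter_ignored(thread, {"user_i_would_rather_not_read"})
```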
Well, it wasn’t me :)
I wish this place worked like StackOverflow, where you can only downvote once you have 100 karma; that would probably reduce the background noise in the voting…
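(Just to spell out how simple the rule I’m wishing for would be, here’s a toy sketch; the threshold value and the function names are made up for illustration, not taken from StackOverflow’s or LW’s actual code.)

```python
DOWNVOTE_KARMA_THRESHOLD = 100  # StackOverflow-style gate; the exact value is illustrative

def can_downvote(voter_karma: int) -> bool:
    """Only voters at or above the karma threshold may cast downvotes."""
    return voter_karma >= DOWNVOTE_KARMA_THRESHOLD

def apply_vote(score: int, voter_karma: int, is_downvote: bool) -> int:
    """Apply one vote to a comment score, silently dropping downvotes
    from voters below the threshold."""
    if is_downvote:
        return score - 1 if can_downvote(voter_karma) else score
    return score + 1
```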