The STV supposes that pleasantness is valuable independently of the agent's embedding in reality. It is thus a Pixie Dust Theory of Happiness, which I argue against in my essay (see the section "A Pixie Dust Theory of Happiness").
While the examples and repetition in the paragraph cited are meant to elicit a strong emotion, the underlying point holds: if you are trying to find the most intense moment of happiness to reproduce, a violent, joyful emotion from an insane criminal mastermind is more likely to be it than a peaceful zen moment from a mellow sage. The extreme negative cost to the victims, however great, is counted only once under this hypothesis; it is thus dwarfed by the infinitely replicated benefit to the criminal.
Emotions are a guide. You ought to feel them, and if they're wrong, you ought to explain them away, not ignore them. But, especially in an already long essay, it's easier and more convincing to show than to explain. If mass murder in the name of wireheading feels deeply wrong, that is a strong argument that it indeed is. Maybe I should update the essay to add this very explanation right after that paragraph.
Admittedly, my essay may not be optimized for the LessWrong audience, but these are among my first essays, written for my preexisting audience. I wanted to share it here because the topic is extremely relevant to LessWrong.
Finally, I'll reply to meta with meta: if you are "totally out of [your] depth… and am not interested in learning it", that is perfectly fine, but then you should disqualify yourself from having an opinion on the "objective utility function of the universe" you start by claiming you believe in, since you later admit that understanding one issue depends on understanding the other. Or perhaps you have an independent proof, along a completely different line of argument, that makes you confident enough not to examine mine; in that case, you should show more sympathy towards those who would dismiss Jeff's argument as insane without examining it in detail.
If wireheading were a serious policy proposal being actively pursued with non-negligible chances of success, I would be shooting to kill wireheaders, not arguing with them.
I am arguing precisely because Jeff and other people musing about wireheading are not actual criminals—but might inspire a future criminal AI if their argument is accepted.
Arguing about a thought experiment means taking it seriously, which I do. And if the conclusion is criminal, that is an important point that needs to be stated. When George Bernard Shaw calmly asserts the political necessity of the large-scale extermination of people unfit for his socialist paradise, carried out scientifically, he is not being a criminal; but it is extremely relevant to note that his ideas, if implemented, would be criminal, and that accepting them as true might indeed inspire criminals to act, and inspire good people to let criminals act.
If I am not to take wireheading seriously, there is nothing to argue about; just a good laugh to be had.
And I am not angry at all about wireheading. But the first post in this series apparently did anger a lot of commenters, who took it personally.