Acting based on the feelings one will experience is something that already happens, so optimizing for it is sensible
I can’t really pick apart your logic here, because there isn’t any. This is like saying “buying cheese is something that already happens, so optimizing for it is sensible”.
Not really. Let me try to clarify what I meant.
We already know that rewards and punishments influence our actions, and any utopia would try to satisfy them. Even in a complex, optimized universe full of un-wireheaded sentients caring about external referents, people would still want to avoid pain, …, and to experience lots of excitement, …. Wireheading just says: that’s all humans care about, so there’s no need for all these constraints; let’s take the obvious shortcut.
In support of this view, I gave the example of the entertainment industry, which optimizes exactly these experiences while being completely fake (and trying to become more fake), and of how many humans react positively to it. They don’t complain that something is missing; rather, they enjoy those improved experiences more than the existing, externally referenced alternatives.
Also, take the reversed experience machine, in which a majority of the students asked said they would stay plugged in. If they had the complex preferences typically cited against wireheading, wouldn’t they have rejected it immediately? A paperclip maximizer would have left the machine right away: it can’t build any paperclips there, so the machine has no value to it. But the reversed experience machine seems to have plenty of value for humans.
This is essentially an outside-view argument against complex preferences. What’s the evidence that they actually exist, that people care about reality, about referents, all that? When presented with options that don’t satisfy any of this, lots of people still seem to choose them.
So, when people pick chocolate, it shows that that’s what they truly desire, but when they pick vanilla, it just means they’re confused and really like chocolate without knowing it.