Extrapolated humanity decides that the best possible outcome is to become the Affront. Now, if the FAI put each person in a separate VR and tricked them into believing that they were acting all Affront-like, then everything would be great: everyone would be content. However, people don't just want the experience of being the Affront; everyone agrees that they want to be truly interacting with other sentiences that will often feel the brunt of each other's coercive actions.
Original version of grandparent contained, before I deleted it, “Besides the usual ‘Eating babies is wrong, what if CEV outputs eating babies, therefore a better solution is CEV plus code that outlaws eating babies.’”
I have never understood what is wrong with the amnesia-holodecking scenario. (Is there a proper name for this?)
If you want to, say, stop people from starving to death, would you be satisfied with being plopped on a holodeck with images of non-starving people? If so, then your stop-people-from-starving-to-death desire is not a desire to optimize reality into a smaller set of possible world-states, but simply a desire to have a set of sensations so that you believe starvation does not exist. The two are really different.
If you don’t understand what I’m saying, the first two paragraphs of this comment might explain it better.
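To make the distinction concrete, here is a minimal Python sketch (purely illustrative; the function names and dictionary keys are invented, not anything from the comments above) contrasting a utility function evaluated over the actual world-state with one evaluated only over the agent's sensory beliefs. The holodeck fully satisfies the second while leaving the first untouched.

```python
# Illustrative toy only (names invented): the same holodeck scenario scored by
# a utility function over the actual world-state versus one over the agent's
# sensory beliefs.

def utility_over_world(world_state: dict) -> int:
    # Cares about reality: fewer people actually starving is better.
    return -world_state["people_starving"]

def utility_over_sensations(belief_state: dict) -> int:
    # Cares only about what the agent perceives/believes.
    return -belief_state["people_believed_starving"]

reality_unchanged = {"people_starving": 1_000_000}
holodeck_beliefs  = {"people_believed_starving": 0}

print(utility_over_world(reality_unchanged))      # -1000000: the holodeck fixes nothing
print(utility_over_sensations(holodeck_beliefs))  # 0: the holodeck fully satisfies this one
```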
Thanks for clarifying. I guess I'm evil. It's a good thing to know about oneself.
Uh, that was a joke, right?
No.
What definition of evil are you using? I’m having trouble understanding why (how?) you would declare yourself evil, especially evil_nazgulnarsil.
I don't care about suffering independently of my sensory perception of it causing me distress.
Oh. In that case, it might be more precise to say that your utility function does not assign positive or negative utility to the suffering of others (if I’m interpreting your statement correctly). However, I’m curious about whether this statement holds true for you at extremes, so here’s a hypothetical.
I'm going to assume that you like ice cream. If you don't like any sort of ice cream, substitute a certain quantity of your favorite cookie. If you could get a scoop of ice cream (or a cookie) for free at the cost of a million babies' thumbs being cut off, would you take the ice cream/cookie?
If not, then you assign non-zero utility to others' suffering, so it might be true that you care very little, but it's not true that you don't care at all.
I think you misunderstand slightly. Sensory experience includes having the idea communicated to me that my action is causing suffering. I assign negative utility to others' suffering in real life because the thought of such suffering is unpleasant.
Alright. Would you take the offer if Omega promised to remove your memory of the agreement (a million babies' thumbs cut off in exchange for a scoop of ice cream) right after you made it, so you could enjoy your ice cream without guilt?
No; at the time of the decision I have the sensory experience of having been the cause of suffering.
I don't feel responsibility toward those who suffer, in the sense that I would choose to holodeck myself rather than stay in reality and try to fix problems. This does not mean that I will cause suffering on purpose.
A better hypothetical dilemma might be one in which I could ONLY get access to the holodeck by causing others to suffer (Cypher from The Matrix).
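A toy sketch of the refusal above (all names and numbers are invented for illustration): if the disutility attaches to the sensory experience at the moment of deciding, a later memory wipe can't undo it, so the deal still comes out negative.

```python
# Toy illustration (invented numbers): the disutility is incurred at decision
# time, so wiping the memory afterwards cannot retroactively un-experience it.

ICE_CREAM_UTILITY = 1
DISTRESS_AT_DECISION = -1000  # experienced the moment the deal is made

def take_deal(memory_wiped_afterwards: bool) -> int:
    # Only the lingering guilt is removed by the wipe, not the distress
    # experienced when agreeing.
    guilt_later = 0 if memory_wiped_afterwards else -10
    return ICE_CREAM_UTILITY + DISTRESS_AT_DECISION + guilt_later

def refuse_deal() -> int:
    return 0

print(take_deal(memory_wiped_afterwards=True))  # -999: still worse than refusing
print(refuse_deal())                            # 0
```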
Okay, so you would feel worse if you had caused people a given amount of suffering than if someone else had caused the same amount?
Yes.
Mmkay. In that case, I would say that our utility functions are pretty different: with regard to suffering, I value world-states according to how much suffering they contain, not according to who causes it.
Well, it’s essentially equivalent to wireheading.
which I also plan to do if everything goes tits-up.
Dorikka,
I don't understand this. If the singleton's utility function were written such that its highest value was for humans to become the Affront, then making it the case that humans believed they were the Affront while not actually being the Affront would not satisfy the utility function. So why would the singleton do such a thing?
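A minimal sketch of the reasoning in this question (illustrative only; the function and dictionary keys are made up): if the singleton's utility function scores the actual world-state, the VR-deception option simply doesn't score, so there is no reason to choose it.

```python
# Illustrative toy only (invented names): a singleton whose utility function
# scores the actual world-state gains nothing from the VR-deception option.

def singleton_utility(world_state: dict) -> float:
    # Highest value only if humans actually are the Affront,
    # regardless of what they believe.
    return 1.0 if world_state["humans_are_affront"] else 0.0

genuinely_affront = {"humans_are_affront": True,  "humans_believe_they_are": True}
vr_deception      = {"humans_are_affront": False, "humans_believe_they_are": True}

print(singleton_utility(genuinely_affront))  # 1.0
print(singleton_utility(vr_deception))       # 0.0: no reason for the singleton to pick this
```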
I don’t think that my brain was working optimally at 1am last night.
My first point was that our CEV might decide to go Baby-Eater, and so the FAI should treat the caring-about-the-real-world-state part of its utility function as a mere preference (like chocolate ice cream) and pop humanity into a nicely designed VR (though I didn't have the precision of thought necessary to put it into such language). However, it's pretty absurd for us to be telling our CEV what to do, considering that it will have much more information than we do and much more refined thinking processes. I actually don't think that our Last Judge should do anything more than watch for coding errors (as in, we forgot to remove known psychological biases when creating the CEV).
My second point was that the FAI should also slip us into a VR if we desire a world-state in which we defect from each other (with results similar to those in the prisoner's dilemma). However, the counterargument from point 1 also applies here.