Luckily, once awesomeness pills become available, there probably won’t be starving children, so that point seems moot.
This is a key assumption. Sure, if I assume that the universe is such that no choice I make affects the chances that a child I care about will starve—and, more generally, if I assume that no choice I make affects the chances that people will gain good stuff or bad stuff—then sure, why not wirehead? It’s not like there’s anything useful I could be doing instead.
But some people would, in that scenario, object to the state of the world. Some people actually want to be able to affect the total amount of good and bad stuff that people get.
And, sure, the rest of us could get together and lie to them (e.g., by creating a simulation in which they believe that’s the case), though it’s not entirely clear why we ought to. We could also alter them (e.g., by removing their desire to actually do good) but it’s not clear why we ought to do that, either.
I can’t imagine feeling intense achievement separated from actually flying (or imagining that I’m flying) a spaceship.
Do you mean to distinguish this from believing that you have flown a spaceship?
Don’t we have to do it (lie to people) because we value other people being happy? I’d rather trick them (or rather, let the AI do so without my knowledge) than have them spend a lot of time angsting about not being able to help anyone because everyone was already helped. (If there are people who can use your help, I’m not about to wirehead you, though.)
Do you mean to distinguish this from believing that you have flown a spaceship?
Yes. Thinking about simulated achievement got me confused about it. I can imagine intense pleasure or pain, but I can’t imagine intense achievement; if I just got the surge of warmth I normally get, it would feel wrong, removed from actually flying a spaceship. Still, that doesn’t mean I don’t have an achievement slider to max; it just means I can’t imagine what maxing it indefinitely would feel like. Maxing the slider and thereby triggering hallucinations of achievement-related activities seems too roundabout; really, that’s the only objection I can articulate: it just feels like it won’t work that way. Can the pill satisfy terminal values without making me think I satisfied them? I think this question shows that the previous sentence is just me being confused. Yet I can’t imagine how an awesomeness pill would feel, hence I can’t dispel this annoying confusion.
[EDIT] Maybe a pill that simply maxes the sliders would make me feel achievement without my flying a spaceship, leaving the feeling incomplete and forcing the AI to include a spaceship hallucinator as well. I think I am/was making it needlessly complicated. In any case, the general idea is that if we are all opposed to just feeling intense pleasure without all the other stuff we value, then a pill that gives us only intense pleasure is flawed and would not even be offered as an option.
Regarding the first bit… well, we have a few basic choices:
Change the world so that reality makes them happy
Change them so that reality makes them happy
Lie to them about reality, so that they’re happy
Accept that they aren’t happy
If I’m understanding your scenario properly, we don’t want to do the first because it leaves more people worse off, and we don’t want to do the last because it leaves us worse off. (Why our valuing other people being happy should be more important than their valuing actually helping people, I don’t know, but I’ll accept that it is.)
But why, on your view, ought we lie to them, rather than change them?
I attach negative utility to having my utility function changed: I wouldn’t change myself to maximize paperclips. I also attach negative utility to having my memory modified; I don’t like the normal decay that happens even now, but having a large swath of my memory wiped is far worse. I also dislike being fed false information, but that is by far the least negative of the three, provided no bad consequences arise from the false belief. Hence, I’d prefer being fed false information to having my memory modified, and either of those to being made to stop caring about other people altogether. There is an especially big gap between the last one and the former two.
Thanks for summarizing my argument. I guess I need to work on expressing myself so I don’t force other people to work through my roundaboutness :)
Fair enough. If you have any insight into why your preferences rank in this way, I’d be interested, but I accept that they are what they are.
However, I’m now confused about your claim.
Are you saying that we ought to treat other people in accordance with your preferences of how to be treated (e.g., lied to in the present rather than having their values changed or their memories altered)? Or are you just talking about how you’d like us to treat you? Or are you assuming that other people have the same preferences you do?
For the preference ranking, I guess I can express it by saying that any change to my priorities leads to me doing things that would be utility+ at the time, but are utility- or utility-neutral now (and since I could be spending that time generating utility+ instead, even neutral is bad).

For example, if I could change my utility function to value eating babies, and babies were plentiful, the change would be a huge source of utility+ afterward. That doesn’t change the fact that it also means I’d eat a ton of babies, which makes the option a huge source of utility- right now; I wouldn’t want to do something that leads to me eating a ton of babies. If I greatly valued generating as much utility+ for myself at any moment as possible, I would take the plunge; instead, I look at the future, decline what is currently utility- for me, and move on. Or maybe I’m just making up excuses to refuse a momentary discomfort in exchange for eternal utility+; after all, I bet someone having the time of his life eating babies would laugh at me and have more fun than I do. The inconsistency here is that I avoid the utility- choice when it comes to changing my terminal values, but have no issue taking the utility- choice when I decide I want to be in a simulation. Guess I don’t value truth that much.

I find that changing my memories leads to similar results as changing my utility function, but on a much, much smaller scale; after all, memories are what make up my beliefs, my preferences, myself as a person. Changing them at all changes my belief system and preferences, but that happens all the time anyway. Changing them on a large scale is significantly worse in how it affects my utility function: it can’t change my terminal values, so it’s still far less bad than directly making me want to eat babies, but still negative. Getting lied to is just bad on its own, with no relation to the above two, and weakest in importance.
My gut says that I should treat others as I would want them to treat me. Provided a simulation is a bit more awesome, or comparably awesome but more efficient, I’d rather take it than the real thing. Hence, I’d want to give others what I myself prefer (in terms of how preferences get satisfied): not because they are certain to agree that being lied to beats angsting about not being able to help people, but because my way is either better or worse than theirs, and I wouldn’t believe in my way unless I thought it better. Of course, I am also assuming that truth isn’t a terminal value for them. In the same way, since I don’t want my utility function changed, I’d rather not change theirs.