If your values are, in fact, as you’ve described them, then it’s a fairly extreme value dissonance.
Just to make this maximally concrete: if you were given a magic button that, if pressed, caused the world to end five minutes after your death, would you press the button?
If you’re serious about thinking that effects on the world you won’t personally experience are irrelevant to you, then presumably your answer is to shrug indifferently… there’s no reason to prefer pressing the button to not pressing it, or vice-versa, since neither activity has any effects that matter.
OTOH, if I offered you twenty bucks to press the button, presumably you would. Why not? After all, nothing important happens as a consequence, and you get twenty bucks.
Most people would claim they wouldn’t press the button. Of course, that might be a pure signaling effect.
I have thought about similar scenarios, and (with the caveat that I’m speaking from introspection and intuitions, not the real thought pattern I would go through given the real choice) yes: I would be mostly indifferent about the button in the first example and would press it in the second.
Fair enough. It follows that, while for all I know you may be a remarkably pleasant fellow and a great friend, I really hope you don’t ever get any significant amount of power to affect the future state of the world.
In his defense, he says:
But since he seems aware of this, he ought to align his “introspection and intuitions” accordingly.
Well, how exactly am I supposed to do this? I can’t convincingly pretend to blow up the world, so there’s always this caveat. As with the trolley problem, I suspect that I would simply freeze, unable to make any reasonable decision, due to the extreme stress and the signaling problem, regardless of what my current introspection says.
Try as I might, I cannot help but regard this statement as either dishonest or sociopathic.
I might be mistaken about myself, but I am certainly honest. I would agree with your judgment that it is sociopathic, as long as it’s understood in the psychiatric sense and not as meaning “not nice” or “doesn’t follow rules”.
Voted up for honesty, not agreement.
Which reminds me of this:
I wouldn’t press the button so that I could experience being the sort of person who wouldn’t press the button.
Let’s say that regardless of your choice you will be selectively memory-wiped, so that you will have no knowledge of having been offered the choice or of having made one.
Would you press the button then? You will be twenty dollars richer if you press it, and the world will be destroyed five minutes after your death.
If I still know, while pressing the button, that the world will be destroyed, then no. The fact that I lose my memories isn’t any more significant than the fact that I die: I still experience violating my values as I press the button.
Cool, then one last scenario:
If you press the button, you’ll be memory-modified into thinking you chose not to press it.
If you don’t press the button, you’ll be memory-modified into thinking you pressed it.
Do you press the button now? In this scenario, you’ll have a longer experience of remembering yourself violating your values if you don’t actually violate them. If you want to avoid remembering a violation, you’ll have to commit one.
I confess that I’m still on the fence about the underlying philosophical question here.
The answer is that I still don’t press the button, because I just won’t. I’m not sure if that’s a decision that’s consistent with my other values or not.
Essentially the process is: as I make the decision, I have the knowledge that pressing the button will destroy the world, which makes me sad. I also have the knowledge that I’ll spend the rest of my life thinking I pressed the button, which also makes me sad. But knowing (in the immediate future) that I destroyed the world makes me sadder than knowing that I ruined my life, so I still don’t press it.
The underlying issue is “do I count as the same person after I’ve been memory-modified?” I don’t think I do. So my utility evaluation is: “I’m killing myself right now, then creating a new, happy person in a world that will be destroyed.” I don’t get to reap the benefits of any of it, so it’s just a question of greater overall utility.
But I realize that I actually modify my own memory in small ways all the time, and I’m not sure how I feel about that. I guess I prefer to live in a world where people don’t mindhack themselves so they can do things that harm me without feeling guilty. To help create that world, I try not to mindhack myself out of feeling guilty about harming other people.
I think you’re trying too hard to justify your position on the basis of sheer self-interest (that you want to experience being such a person, that you want to live in such a world), and missing the more obvious explanation: your utility function isn’t completely selfish. You care about the rest of the real world, not just your own subjective experiences.
If you didn’t care about other people for themselves, you wouldn’t care about experiencing being the sort of person who cares about other people. If you didn’t care about the future of humanity for itself, you wouldn’t care about whether you’re the sort of person who presses or doesn’t press the button.
Oh I totally agree. But satisfying my utility function is still based on my own subjective experiences.
The original comment, which I agreed with, wasn’t framing things in terms of “do I care more about myself or about saving the world.” It was about “do I care about PERSONALLY having experiences or about other people who happen to be similar/identical to me having those experiences?”
If there are multiple copies of me, and one of them dies, I didn’t get smaller. One of them died. If I get uploaded to a server and then continue on my life, periodically hearing about how another copy of me is having transhuman sex with every Hollywood celebrity at the same time, I didn’t get to have that experience. And if a clone of me saves the world, I didn’t get to actually save the world.
I would rather save the world than have a clone do it. (But that preference is not so strong that I’d rather have the world saved less than optimally if it meant I got to do it instead of a clone.)
I entirely agree—I noticed Raemon’s comment earlier and was vaguely planning to say something like this, but you’ve put it very eloquently.
You mean their verbal endorsement of ‘not pressing’ is a pure signaling effect? Or that they have an actual policy of ‘not pressing’ but one which has been adopted for signaling reasons? (Or that the difference is moot anyway since the ‘button’ is very far from existing?)
Well I think most people care about (in ascending order of distance) relatives, friends, causes to which they have devoted themselves, the progress of science or whichever cultural traditions they identify with, objects and places of great beauty—whether natural or manmade. Putting all of this down to ‘signaling’, even if true, is about as relevant and informative as putting it all down to ‘neurons’.
I meant that their claim (verbal endorsement) might be a pure signaling effect and not actually a reliable predictor of whether they would press the button.
I also agree that to say that something is a signaling effect is not saying very much.
Most people believe in an afterlife, which alters that scenario a bit. Of the remainder, though, I think it’s clear that it’s very important to signal that you wouldn’t push that button, since it’s an obvious stand-in for all the other potential situations where you could benefit yourself at the expense of others.