Let’s say that regardless of your choice you will be selectively memory-wiped, so that you will have no knowledge of having been offered the choice or of making one.
Would you press the button then? You will be twenty dollars richer if you press it, and the world will be destroyed five minutes after your death.
If I still know, while pressing the button, that the world will be destroyed, then no. The fact that I lose my memories isn’t any more significant than the fact that I die. I still experience violating my values as I press the button.
Cool, then one last scenario:
If you press the button, you’ll be memory-modified into thinking you chose not to press it.
If you don’t press the button, you’ll be memory-modified into thinking you pressed it.
Do you press the button now? In this scenario, you’ll spend longer remembering yourself violating your values if you don’t actually violate them. If you want to avoid remembering a violation, you’ll have to commit one.
I confess that I’m still on the fence about the underlying philosophical question here.
The answer is that I still don’t press the button, because I just won’t. I’m not sure if that’s a decision that’s consistent with my other values or not.
Essentially the process is: As I make the decision, I have the knowledge that pressing the button will destroy the world, which makes me sad. I also have the knowledge that I’ll spend the rest of my life thinking I pressed the button, which also makes me sad. But knowing (in the immediate future) that I destroyed the world makes me sadder than knowing that I ruined my life, so I still don’t press it.
The underlying issue is “do I count as the same person after I’ve been memory-modified?” I don’t think I do. So my utility evaluation of pressing the button is “I’m killing myself right now and creating a new, happy person in a world that will be destroyed.” I don’t get to reap the benefits of any of it, so it’s just a question of greater overall utility.
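To make the structure of that evaluation explicit, here is a minimal sketch in Python. The weights and the disutility function are purely hypothetical illustrations of the reasoning above, not anything from the original discussion; the only substantive claim is that the disutility of actually destroying the world outweighs the disutility of falsely remembering that I did.

    # Toy comparison of the two options at decision time.
    # All numeric weights are hypothetical; only their ordering matters.

    SADNESS_WORLD_DESTROYED = 1000  # knowing I actually destroyed the world
    SADNESS_FALSE_MEMORY = 10       # living with the false memory of having pressed

    def disutility(press_button):
        """Disutility as evaluated at the moment of choice, before any memory wipe."""
        if press_button:
            # The world really is destroyed, even though I won't remember choosing it.
            return SADNESS_WORLD_DESTROYED
        # The world survives, but I spend my life believing I pressed the button.
        return SADNESS_FALSE_MEMORY

    # Pick the option with the smaller disutility under this (assumed) weighting.
    best_choice = min([True, False], key=disutility)
    print("Press the button?", best_choice)  # -> False

The point of the sketch is only that both branches are scored from the perspective of the person at the moment of choice, before any memory modification happens; the later, modified person’s beliefs don’t change the weights.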
But I realize that I actually modify my own memory in small ways all the time, and I’m not sure how I feel about that. I guess I prefer to live in a world where people don’t mindhack themselves so that they can harm me without feeling guilty. To help create that world, I try not to mindhack myself into not feeling guilty about harming other people.
I think you’re trying too hard to justify your position on the basis of sheer self-interest (that you want to experience being such a person, that you want to live in such a world), and missing the more obvious explanation: your utility function isn’t completely selfish. You care about the rest of the real world, not just your own subjective experiences.
If you didn’t care about other people for themselves, you wouldn’t care about experiencing being the sort of person who cares about other people. If you didn’t care about the future of humanity for itself, you wouldn’t care about whether you’re the sort of person who presses or doesn’t press the button.
Oh I totally agree. But satisfying my utility function is still based on my own subjective experiences.
The original comment, which I agreed with, wasn’t framing things in terms of “do I care more about myself or about saving the world.” It was about “do I care about PERSONALLY having experiences or about other people who happen to be similar/identical to me having those experiences?”
If there are multiple copies of me, and one of them dies, I didn’t get smaller. One of them died. If I get uploaded to a server and then continue on with my life, periodically hearing about how another copy of me is having transhuman sex with every Hollywood celebrity at the same time, I didn’t get to have that experience. And if a clone of me saves the world, I didn’t get to actually save the world.
I would rather save the world than have a clone do it. (But that preference isn’t so strong that I’d rather have the world saved less than optimally just so I could be the one to do it instead of a clone.)
I entirely agree—I noticed Raemon’s comment earlier and was vaguely planning to say something like this, but you’ve put it very eloquently.