A similar problem can be created with other scenarios. For instance, suppose you are planning on spending all day doing some unpleasant activity that will greatly benefit you in the future. Omega tells you that some mad scientist plans on making a very large number of independent, lockstep-identical brain* emulators of you that will have the exact same experiences you will be having today, and then be painlessly stopped and deleted after the day is up (assume the unpleasant activity is solitary, to avoid complications about the scientist having to simulate other people too for the copies to have truly identical experiences).
Should you do the unpleasant activity, or should you sacrifice your future to try to make your many-copied day a good one?
I’m honestly unsure about this and it’s making me a little sick. I don’t want to have to live a crappy life because of weird anthropic scenarios. I have really complicated, but hopefully not inconsistent, moral values about copies of me, especially lockstep-identical ones, but I’m not sure how to apply them here. Generally I think that lockstep-identical copies whose lifetime utility is positive don’t add any value (I wouldn’t pay to create them), but it seems wrong to apply this to lockstep-identical copies with negative lifetime utility (I might pay to avoid creating them). It seems obviously worse to create a hundred tortured lockstep copies than to create ten.
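To pin down the asymmetry I’m gesturing at (just a sketch, with made-up notation I’m not committed to): if $u_{\text{me}}$ is my own lifetime utility and $u_i$ is the lifetime utility of the $i$-th of $N$ lockstep-identical copies, my intuitions seem to aggregate roughly as

$$U_{\text{total}} = u_{\text{me}} + \sum_{i=1}^{N} \min(0,\ u_i)$$

so copies with good lives contribute nothing extra, while each suffering copy counts in full, which is why a hundred tortured copies comes out worse than ten.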
One fix that would allow me to act normally would be to add a stipulation to my values that in these kinds of weird anthropic scenarios where most of my lockstep copies will die soon (and this is beyond my control), I get utility from taking actions that allow whichever copies survive to live good lives. If I decide to undergo the unpleasant experience for my future benefit, even if I have no idea whether I’m going to be a surviving copy or not (but am reasonably certain there will be at least some surviving copies), I get utility that counterbalances the unpleasantness.
Obviously such a value would have to be tightly calibrated to avoid generating behavior as crazy as the problem I devised it to solve. It would have to apply only in weird lockstep anthropic scenarios and not inform the rest of my behavior at all. The utility would have to be high enough to counterbalance any disutility all of the mes would suffer, but low enough to avoid creating an incentive to create suffering, soon-to-die, lockstep-identical copies. It would also have to avoid creating an incentive for quantum suicide. I think it is possible to satisfy all these stipulations.
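To make the calibration a bit more concrete (again just a sketch with made-up symbols, not rigorous bookkeeping): say the scenario involves $N$ doomed copies, the unpleasant day costs each instance $d$ in utility, the future benefit to the surviving me is $B$, and the stipulated bonus is $V$. Then roughly I want

$$V + B - (N+1)\,d > 0 \qquad \text{(the bonus keeps the unpleasant day worth doing)}$$

$$V - N\,d < 0 \qquad \text{(it never pays to create suffering, doomed copies just to collect the bonus)}$$

On this loose accounting the two can hold at once whenever $B > d$, i.e., whenever the future benefit really does outweigh one day’s unpleasantness.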
In fact, I’m not sure it’s really a severe modification of my values at all. The idea of doomed mes valiantly struggling to make sure that at least some of them will have decent lives in the future has a certain grandeur to it, like I’m defying fate. It seems like there are far less noble ways to die.
If anyone has a less crazy method of avoiding these dilemmas though, please, please, please let me know. I like Wei Dai’s idea, but am not sure I understand MWI well enough to fully get it. Also, I don’t know if it would apply to the artificially-created-copy scenario in addition to the false vacuum one.
*By “lockstep” I mean that the copy will not just start out identical to me. It will have identical experiences to me for the duration of its lifetime. It may have a shorter lifespan than me, but for its duration the experiences will be the same (for instance, a copy of 18-year-old me may be created and be deleted after a few days, but until it is deleted it will have the same experiences as 18-year-old me did).
If anyone has a less crazy method of avoiding these dilemmas though, please, please, please let me know.
Ignore them?
Why do you need answers to these questions, so intensely that being unsure is “making [you] a little sick”? There is no Omega, and he/she/it is not going to show up to create these scenarios. What difference will an answer make to any practical decision in front of you, here and now?
There is no Omega, and he/she/it is not going to show up to create these scenarios. What difference will an answer make to any practical decision in front of you, here and now?
While Omega is not real, it seems possible that naturally occurring things like false vacuum states and Boltzmann brains might be. I think the possibility that those things exist might create similar dilemmas, am disturbed by this fact, and wish to know how to resolve them. I’m pretty much certain there’s no Omega, but I’m not nearly as sure about false vacuums.
I’m reminded of this part of The Moral Void.
If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic—anywhere you care to put it—then what if you get a chance to read that stone tablet, and it turns out to say “Pain Is Good”? What then?
Maybe you should hope that morality isn’t written into the structure of the universe. What if the structure of the universe says to do something horrible?
And if an external objective morality does say that the universe should occupy some horrifying state… let’s not even ask what you’re going to do about that. No, instead I ask: What would you have wished for the external objective morality to be instead? What’s the best news you could have gotten, reading that stone tablet?
Go ahead. Indulge your fantasy. Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted? If you could write the stone tablet yourself, what would it say?
Maybe you should just do that?
I mean… if an external objective morality tells you to kill people, why should you even listen?