Parallel copies of me are not me. Dying in X% of Everett branches has X% of the disutility of dying.
Gradual change feels OK. Less gradual change feels less OK (unless I perceive it as an improvement). Going to sleep at night and knowing that in the morning I will feel a bit different already makes me nervous. But it’s preferable to dying. (But if I could avoid it without bad consequences, I would.) Small changes are good, because more of my future selves will be more similar to my current self.
How exactly does one measure the similarity or the change? Some parts of myself seem more important to me than other parts. Maybe if I made a matrix of how much each part of me influences the other parts, my identity would be something like its principal eigenvector. A part of me that strongly influences other parts of me is more me. You could change my taste for ice cream and the rest of my personality would remain pretty much the same, so I would be willing to sacrifice the taste for ice cream for survival, or even for some minor benefit to the remaining parts.
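A toy sketch of that eigenvector idea (the trait names and influence numbers below are made up purely for illustration, not anything from the original comment): treat the self as a nonnegative influence matrix and read off the principal eigenvector, which weights each trait by how strongly it shapes traits that are themselves influential.

```python
import numpy as np

traits = ["core values", "memories", "sense of humor", "taste for ice cream"]

# influence[i][j] = how strongly trait j influences trait i (made-up numbers)
influence = np.array([
    [0.0, 0.6, 0.3, 0.0],   # core values are shaped by memories and humor
    [0.7, 0.0, 0.2, 0.0],   # memories are shaped mostly by core values
    [0.5, 0.4, 0.0, 0.1],   # humor is shaped a little by everything
    [0.2, 0.2, 0.1, 0.0],   # ice cream taste barely feeds back into anything
])

# Power iteration for the principal eigenvector of the transposed matrix:
# v[j] ends up proportional to how much trait j influences influential traits.
v = np.ones(len(traits))
for _ in range(100):
    v = influence.T @ v
    v /= np.linalg.norm(v)

for trait, weight in zip(traits, v):
    print(f"{trait}: {weight:.2f}")
```

On this toy matrix the "taste for ice cream" trait gets the smallest weight, matching the intuition that it could be sacrificed with little damage to the rest.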
Would making 2 copies of me double my utility? Well, it depends on what “my” utility means in this context. Neither of the copies would perceive the utility of the other copy, so neither of them would feel like anything has doubled. But there would be 2 people having the utility of being alive, so globally there would be twice as much utility for my future selves, just not twice the utility for any particular self. (Just like dying in 50% of Everett branches does not mean that there is a self which feels only 50% alive.)
If I had an opportunity to copy myself, assuming that each copy will have the same quality of life as I have now, I would do it. If I had to pay for making the copy… I don’t know how much I would be willing to pay. (Also, money is not exactly the same thing as utility, so if I had to split my property in two halves, a half for each copy, it would still be worth it, because each copy would get more than 50% of the utility.)
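A toy calculation of that last parenthetical, under the common assumption (mine, not the comment’s) that utility is concave in money, e.g. u(wealth) = sqrt(wealth): halving the property leaves each copy with noticeably more than half of the original utility.

```python
import math

wealth = 100.0                          # made-up starting wealth
u_original = math.sqrt(wealth)          # utility of the single original: 10.0
u_each_copy = math.sqrt(wealth / 2)     # utility of each copy: ~7.07

print(u_each_copy / u_original)         # ~0.71: each copy keeps ~71% of the utility
print(2 * u_each_copy / u_original)     # ~1.41: summed over both copies, more than before
```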
Without cryonics or uploading or some other immortality treatment, selves die. Pretending that you continue to live in a galaxy far far away is like pretending that you continue to live in heaven. And by the way, that’s not a cheap analogy, because in the Tegmark multiverse there exists a heaven where you will be after you die, assuming “heaven” = a place where you get 3^^^3 utility, and “you, after you die” = a particle-level copy of you at the moment of your death, except that in heaven you will survive. And this is not good news, because hell exists too, and maybe a random afterlife is more like hell than heaven, in which case a prolonged life in our universe is preferable.
Parallel copies of me are not me. Dying in X% of Everett branches has X% of the disutility of dying.

Let us suppose you are forced to play quantum roulette (assume the payoffs are along the lines of those described here). The next day, someone asks you whether you are glad that you were forced to play quantum roulette. Do you answer:
NO! I just 50% died! Those @%#%@ assholes!
Yes! I just got $300k for free!
I ask because, while from the perspective of evaluating expected future payoffs your two assertions are compatible, from the perspective of evaluating outcomes that have already happened they are not.
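For concreteness, here is a toy expected-utility sketch of the two perspectives, taking the “dying in 50% of branches carries 50% of the disutility of dying” view literally. The $300k payoff comes from the comment above; the utility numbers are my own illustrative assumptions.

```python
# Made-up utilities, just to make the two perspectives concrete.
u_money = 10.0        # utility of winning $300k
u_death = -1000.0     # disutility of dying
p_win = 0.5           # quantum roulette: survive (and get paid) in half the branches

# Ex ante: expected utility of being forced to play.
ex_ante = p_win * u_money + (1 - p_win) * u_death
print(ex_ante)            # -495.0: strongly negative, so being forced to play is bad

# Ex post, from the surviving branch: the payoff landed, the loss landed elsewhere.
ex_post_survivor = u_money
print(ex_post_survivor)   # 10.0: "Yes! I just got $300k for free!"
```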
I guess my answer would be something like: “It’s too bad they 50% killed me… but now, I’m not going to cry for my parallel dead bodies (they are a sunk cost) and I’ll enjoy the money.” So I would be happy that I am in the winning branch, but also aware of the cost, so I would not be retrospectively glad about being forced to play.
Does this make sense? I would be both happy and unhappy about two different aspects of the situation. The part that makes me worry about the death of non-me’s is that they were killed by something that was a threat for me too. (Something like when terrorists capture 16 prisoners and kill 15 of them and you are the one they release, and then somehow illogically they also pay you a lot of money. They did not kill you, and you even profited from the action, but that was not a personal decision on their side, just a random choice. So in some sense, they wanted to kill you too, and almost succeeded.)
Thanks, the illustration with the terrorists nailed the meaning down.
The answer is that I just had a 50% chance of dying. Assholes.
This should be pretty obvious considering the pile of corpses the quantum murderers are leaving behind as they repeat their game.
That actually isn’t an answer to the clarification I asked of Viliam. If you (are sane and so) consider quantum roulette undesirable then naturally you consider the folks who forced it upon you to be assholes. Yet you are the (measure of the) guy that won the lottery, didn’t die and got the $300k. If Viliam considers parallel copies not-me then after the coin is tossed he doesn’t care (in the direct personal sense) about the other ‘not-me’ guy who lost and got killed.
Mind you, the language around this subject is ambiguous. It could be that Viliam’s expression was intended to place parallel-Everett-branch selves into a qualitatively different class from other forms of parallel selves.
I see what you are saying now. Thanks for clarifying.
Glad to hear it. I hope my inclusion of parenthetical ‘sanity’ claims conveyed that I essentially agree with what you were saying too.
Yea, that helped.
And by the way, that’s not a cheap analogy, because in the Tegmark multiverse there exists a heaven where you will be after you die, assuming “heaven” = a place where you get 3^^^3 utility

Curiously, for all the enormous scope of the higher-level Tegmark multiverses, this isn’t necessarily the case. The evaluation of utility is determined by an extrapolation from you. If you are the kind of person who does not have an unbounded utility function, it is entirely possible that “heaven” does not exist even in Tegmark’s level IV ultimate ensemble. It would require going beyond that, to universes that aren’t mathematically possible.