Dying in Many Worlds

I feel extremely embarrassed about asking for help with this, but I have a philosophical quandary that has been eating at me for days. I’m sure that many of you already have it figured out. I would appreciate it if you would lend your cached thoughts to me, because I can’t seem to resolve it.
What has been bothering me are the implications of combining two views that are common here on Less Wrong. The first is that the Many Worlds Interpretation (MWI) of quantum mechanics is correct. The second is that two identical copies of a person count as the same person, and that therefore you haven’t really died if you manage to make another version of yourself that survives while an earlier version dies (for instance, if you sign up for cryonics and in the future an AI scans your frozen corpse and uses the data to synthesize a new version of you). Robin Hanson has even argued that it would be morally acceptable to create billions of brain emulations of one person for a specific task and then erase them afterward; as long as at least one copy of the emulated person remains alive, all you’ve really done is give one person “mild amnesia.” Both of these viewpoints seem plausible to me, although I am less sure of Hanson’s rather radical extension of the second one.
Combining these views has potentially disturbing implications. If the MWI is correct, then there already are a vast number of versions of everyone out there somewhere. This has filled me with the distressing thought that the badness of death might somehow be diminished because of this. I am aware that Eliezer has written articles that seem to explain why this is not the case (“Living in Many Worlds” and “For the People Who Are Still Alive”), but I have read them and am having trouble grasping his arguments.
It seems obvious to me that it is bad to kill someone under normal circumstances, and that the badness of their death does not decrease because other parts of the multiverse contain duplicates of them. Eliezer seems to agree, and I think Robin (who has stated that he supports the MWI and has contributed to work on the subject) does too. I very much doubt that if Robin Hanson were greeted by a knife-wielding maniac who announced that he intended to “give mild amnesia to alternate-universe versions of Robin and his family,” he would make any less effort to save himself and his family than a version of Robin who did not support the MWI.
On the other hand, the argument that making other versions of yourself before you die is a form of survival seems persuasive to me as well. I think that if cryonics works it might count as survival in this sense, as would having a brain emulation of yourself made.
What first pushed me down this disturbing line of thought was a thought experiment I was considering, in which Omega gives you a choice between:
1. Adding 50 years to your life that you would spend achieving large, important, impressive accomplishments.
2. Adding 200 years to your life that you would spend being stuck in a “time loop,” repeating one (reasonably good) day of your life over and over again, with your memory erased and your surroundings “reset” at the beginning of every morning to ensure you live each day exactly the same.
I concluded that I would probably prefer the first option, even though it was shorter, because a life where you do new things and accomplish goals is better than one endlessly repeated (though I would still prefer the 200 repetitive years to being killed outright). This led me to conclude that, for many people, a life of progress and accomplishment is better than one where the same experiences are repeated endlessly.
“But wait!” I thought. “We already are sort of living in a time loop! If the MWI is correct, then there are countless copies of us all having the exact same experiences, repeated endlessly! Does that mean that if killing someone allowed you to lead a more accomplished life, it would be alright, because all you’d be doing is reducing the number of repetitions in an already repeating life?” This seems obviously wrong to me, so there must be something wrong with my analogy.
I have made a list of possible reasons why death might still be bad if copies of you in other worlds survive, but not be as bad if you have made copies of yourself in the same world. I have also made a second list of reasons why the thought experiment I just described isn’t a good analogy to the MWI. I’d appreciate any thoughts on whether I’m on the right track.
The first list, in order of descending plausibility:
1. I am a moron who doesn’t understand quantum physics or the MWI properly, and if I did understand them properly this conundrum wouldn’t be bothering me.
2. When someone is duplicated through the MWI, all the relevant factors in their environment (other people, resources, infrastructure, etc.) are duplicated as well. Because of this, the moral worth of an action in one world out of many is exactly the same as it would be if there were only one world. This seems very plausible to me, but I wish I could see a more formal argument for it (I take a stab at sketching one after this list).
3. The fact that the multiple worlds cannot currently, and probably never will be able to, interact in any significant way means that the moral worth of an action in one world out of many is exactly the same as it would be if there were only one world. I think this might be what Eliezer was talking about when he said, “I would suggest that you consider every world which is not in your future, to be part of the ‘generalized past,’” but I’m not sure.
4. Reasons 2 and 3 combined.
5. If the only version of you in a world dies, then you cease to be able to impact that world in any way (i.e., continue important projects, stay in touch with your loved ones). This is not the case with technological duplicates living in the same world. This seems slightly plausible to me, but it still seems like it would be wrong to kill someone who had no strong social ties or important projects in life, regardless of how many of them might exist in other worlds.
6. It’s impossible to kill just one version of a person in the multiverse. Any death in one world will result in a vast number of other deaths as the worlds continue to diverge.
7. Some kind of quasi-Rawlsian argument where one should try to maximize one’s average wellbeing in the worlds one is “born into.” I think Eliezer might have made such an argument in “For the People Who Are Still Alive.”
8. Survival via making copies is a form of survival, but it’s a crappy kind of survival, inferior to never having the original destroyed in the first place. It’s sort of like an accident victim losing their legs: it’s good they’re alive, and their future life will probably be worth living (at least in a first-world country with modern medicine), but it would have been much better if they had survived without losing their legs.
9. It’s good to have as many copies of yourself as possible, so killing one is always bad. This seems implausible to me. If I discovered someone was going to use technology to run off a large number of versions of themselves, stealing large amounts of resources to do so and radically decreasing the quality of other people’s lives, then it would be right to stop them. Also, it seems to me that if I could spend money to duplicate myself, I would devote some money to that, but devote other money to enriching the lives of existing copies.
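To make the second reason on that list a little more concrete, here is a minimal sketch of the kind of formal argument I have in mind. It assumes (and this is an assumption on my part, not something I can defend rigorously) that branch weights can be treated the way probabilities are treated in ordinary expected-value reasoning:

\[
U_{\text{total}} = \sum_i w_i V_i, \qquad \sum_i w_i = 1,
\]

where \(w_i\) is the weight (measure) of branch \(i\) and \(V_i\) is the value of what happens in that branch. An action taken in branch \(j\) that changes that branch’s value by \(\Delta V\) changes the total by

\[
\Delta U_{\text{total}} = w_j \, \Delta V,
\]

which is exactly the contribution it would make if branch \(j\) were the only world and were reached with probability \(w_j\). The number of other branches never shows up in the term the actor controls, so on this accounting the moral weight of killing (or sparing) someone in one branch is unchanged by the existence of all the others, and everything adds up to normality.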
The second list (reasons why my “time loop” thought experiment isn’t analogous to the MWI):
1. Again, it’s impossible to kill just one version of a person in the multiverse. Killing someone to improve your life under Many Worlds would be like having Omega stick two people in a “time loop,” and then have one kill the other at the start of every morning.
2. One life of accomplishment is better than one repetitive life, but it isn’t better than two repetitive lives.
3. Prioritarianism is closer to the Meaning of Right than pure utilitarianism, so even if one life of accomplishment is better than two repetitive lives, it’s still wrong to kill someone to improve your own life.
Again, I would really appreciate it if someone could explain this to me. This problem has really been upsetting me: I have trouble focusing on other things, and it’s negatively affecting my mood. I know that the fact that I can be affected so severely by an abstract issue is probably a sign of deeper psychological problems that I should get looked at. But I think realizing that this problem isn’t really a problem, and that it all adds up to normality, is a good first step.