“My curiosity doesn’t suddenly go away just because there’s no reality, you know!”
Eliezer, I want to high-five you.
Does this “Many worlds” thing imply that there exists (in some meaningful sense) other worlds alongside us where whatever quantum events didn’t happen here happened? (If not, or if this is a wrong question, disregard the following.)
What are the moral implications? If some dictator says “If this photon passes through this filter (which it can do with probability 0.5), I will torture you all; if it is absorbed, I will do something vaguely nice.”, and the photon is absorbed, should we rejoice, or should we grieve for those people in another world who are tortured?
Should we try quantum suicide? I think I’m willing to die (at least once, but maybe not in a lot of worlds, my poor little brain can’t grasp the concept of multiple deaths) to let one world know whether the MWI is true.
What about other events? A coinflip isn’t really a quantum random event (and may not even be random at all if you know enough), but the coin is made out of amplitudes—are there worlds where the coin lands on the other side? We won WW2 by the skin of our teeth, are there any worlds where the Earth is ruled by Nazi Germany?
Disclaimer: I don’t understand QM on a formal level. But here’s what I got out of reading the Sequences and other LW discussions on the subject.
Does this “Many worlds” thing imply that there exists (in some meaningful sense) other worlds alongside us
They exist, in a special sense of the word. Instead of arguing about definitions of existence, measure of reality, etc., let’s talk about the experimental consequences. Which are: you’re not going to interact with them ever again. They exist at most as much as people in our own branch who are outside our Hubble radius.
Should you still grieve for them? That’s for you to decide, but I do make a suggestion: grief is in part a useful adaptation. It may help motivate you to prevent more future grief. If you cannot prevent future grief-causing events (because quantum torture branches will always keep splitting off, and to the extent you cannot influence their measure), then that grief is useless. Eliminating it (not grieving) makes you better off and no-one else worse off, so in such cases I suggest you do not grieve.
Should we try quantum suicide?
Again, there may well be good quantum-theoretical arguments against quantum suicide. But here’s a more practical one. Suppose it works. It has been suggested that in the vast majority of the branches in which you survive, you do not survive unscathed: you survive hurt, reduced, as an invalid, etc. If you rig up a gun to shoot you, there are some branches where it fails to shoot entirely, but there are many more branches where it misses just enough that you live on as a cripple. Quantum suicide is dangerous in the same way an outcome pump is.
are there any worlds where the Earth is ruled by Nazi Germany?
In principle, any world whose past evolution does not contradict the laws of physics exists as a branch.
Most people try to avoid the unpleasant implications by assigning significance to the weight of those branches. I find this a bit problematic when applied to branches that are not in our future: the Born probabilities govern the branch we expect to witness, but we don’t understand why or how, so why should we say they govern some “reality measure” of branches we cannot interact with?
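(To be concrete about what I mean by the Born probabilities, this is just the textbook statement, nothing more: writing the state as a sum over branches, the probability of witnessing branch $i$ is the squared magnitude of that branch’s amplitude,

$$|\psi\rangle \;=\; \sum_i \alpha_i\,|i\rangle, \qquad P(i) = |\alpha_i|^2, \qquad \sum_i |\alpha_i|^2 = 1.$$

The open question is whether these same weights should also be read as a “reality measure” for branches we can never interact with.)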
should we rejoice, or should we grieve for those people in another world who are tortured?
People were, in fact, tortured. You can grieve for them if you wish.
Should we try quantum suicide?
That is also a question of how branching world-lines work.
I’d say no. Identity is an illusion. Everyone only exists for an instant, and a “person” is actually a world-line composed of tons of different people who all think they’re the same person. If you perform the experiment, there will be fewer people who think they’re you.
are there any worlds where the Earth is ruled by Nazi Germany?
Every world exists, but some exist more than others. Don’t take that at face value. All it means is that not all of the worlds are equally likely. I have no idea why. Just rest assured that the other worlds exist somehow.
People were, in fact, tortured. You can grieve for them if you wish.
If you grieve for everyone tortured in every branch not your own, not singling out your own branch for special treatment out of the literal infinity of branches, then I understand you have your work cut out for you just managing the mathematical infinities involved to specify a utility function. (The solutions I’ve seen all start by putting in the desired conclusion as an arbitrary assumption.)
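(For example, the kind of solution I have in mind weights each branch by its Born measure, something like

$$U \;=\; \sum_i w_i\, u_i, \qquad w_i = |\alpha_i|^2, \qquad \sum_i w_i = 1,$$

which stays finite as long as each branch utility $u_i$ is bounded; but declaring the Born weights to be the morally relevant ones is exactly the assumption that gets put in by hand.)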
No-one can or will mourn literally infinite people. (Even if you ignore people in other branches, what about people in our own in case our universe is spatially infinite and everything possible happens infinitely many times?) This is not how mourning works in humans.
You can mourn the general fact that suffering happens, without letting the (probably infinite) amounts of it directly establish the amount of mourning done. It wouldn’t be productive in any sense, because in a universe where everything happens somewhere—whether via quantum branches or sheer size or both—you can’t reduce the suffering, it’ll always be infinite. So mourning in this case does not serve any purpose; I would wish to stop feeling such mourning if I felt it.
Just rest assured that the other worlds exist somehow.
And that you cannot interact with them ever again and therefore should not mourn them.
And that you cannot interact with them ever again and therefore should not mourn them.
If people leave on a spaceship to colonize another galaxy, and between their speed and the expansion of the universe it is physically impossible to interact with them ever again, surely they still have moral weight. If the spaceship company had constructed the spaceship to collapse the moment they could no longer ever interact with us, to cut costs, then surely when we discovered this from their internal documents we would prosecute them as criminals, even though the consequences of their crime occurred somewhere as fundamentally separate from us as another world.
I don’t think you have, in your morality, an exception for everyone who is causally isolated from you.
You’re just stating your conclusion again. Such a moral belief is possible, but it’s a choice. I choose not to care morally about people I cannot even in principle interact with.
then surely when we discovered this from their internal documents we would prosecute them as criminals
Note that punishment for crime isn’t the same as grief, and works on different rules.
Why punish people? To reduce future similar crime. (I don’t accept moral propositions of punishment for punishment’s sake.) I could board such a ship in the future myself, and would not wish it to be sabotaged. So I want these saboteurs to be punished to deter future crime.
Here’s another argument for the same conclusion: their action reduced the (expected) utility of the people on the ship while they were still in contact with us. We just didn’t find out about it until later. This is analogous to a case where we discover that two years ago, Jane wounded Alex. We know that a year ago, Alex died from unrelated causes. We still want to punish Jane today even though Alex himself can no longer be compensated.
I don’t think you have, in your morality, an exception for everyone who is causally isolated from you.
My morality comes from two main sources. One is how I feel (due to nature and nurture): such as grief. Sometimes I find this is not how I want to feel, and then I try to change myself—as I would with any other feelings. So if I discovered myself grieving for people outside my universe, I would try to stop doing so.
Luckily I, like most people I think, don’t grieve for such people: grief falls off rapidly for more distant suffering (in space and/or time). People outside the future light cone, or in other quantum branches, are as far as they can be from me and still exist in some sense.
The second source of my morality is practical ethics: how do I want to behave, and want others to behave, to achieve certain things? Here too, grieving or expending any other resource (time, effort, thought) on people I cannot interact with doesn’t benefit me or them or anyone else, so I would prefer not to do it.
Can you clarify why you choose to grieve for people at all?
I mean, you seem to classify grieving as an example of expending resources on someone. So if person A dies and person B grieves, B is expending resources on someone. Who benefits from those resources? It certainly isn’t A; A is dead.
There seem to be a number of possibilities.
1) Nobody actually benefits from those resources being expended. In which case your reasoning seems to equally well reject all grief, not just grief over hypothetical superluminal travellers.
2) Some surviving person benefits from B’s grief… maybe B themselves, maybe A’s family members, maybe somebody else. In this case rejecting grieving for A may have costs, and perhaps those potential costs should be understood before rejecting it.
3) A benefited, while alive, from the fact that B runs algorithms that reliably result in B grieving for A once A is dead. In this case rejecting grieving for A may have the consequence of also rejecting those algorithms, which would perhaps otherwise have been beneficial to someone in the same way that they were in the past beneficial to A. Here again, perhaps those potential costs should be understood before rejecting grief.
Is there a fourth option?
A combination of all three options is true; I don’t know of a fourth. Grief is mostly a waste because there’s more of it than I’d like (option 1), but also helps to prevent future causes of grief (option 3) and possibly helps the griever cope (option 2).
I see grief as analogous to pain. It’s an evolved response. Its primary function is conditioning by negative reinforcement. To avoid grief, people try to prevent grief-causing situations, e.g. protecting their loved ones more. Just as with pain, we have to live with grief today but we may wish to self-modify to grieve less.
Because it’s an evolved mechanism, it tends to be entangled with other processes; thus it is claimed to have a secondary purpose—to help with “healthy psychological coping” of the grieving person in accepting reality. I’ve heard this claim but have not looked into its sources and don’t have a good estimation of how true or important this is.
I suffer from experiencing grief a lot more than I am willing to suffer in order to get these benefits. If it was just a matter of choice, I would choose to grieve a lot less or maybe not at all, in all situations. That would require a level of modification of my psychology that would also enable me to get the above benefits without grieving. In reality I don’t have that level of control.
However, we do have some control over how much we grieve. In particular, grieving for very distant people seems to be off-by-default in most people, and only activated by deliberate thinking about those distant people; i.e. this kind of grief may be avoided a lot of the time. It also happens to be the kind of grief where the above benefits are least (or nonexistent). So of course I focus my efforts and advise others to practice grieving less first of all in such circumstances.
Note: “grief” can be read broadly, as in “feeling sad through empathy with suffering distant others”.
Given this, I am very confused by what you think is special about the esoteric possibilities you discuss with alex_zag_al above.
That is, given my understanding of your position, it seems you should reject or endorse grieving over those doomed intergalactic explorers to basically the same degree that you would either reject or endorse grieving over a boat full of tourists who drown on their way to Greece. (I’m not really sure what degree that is… what I get from your explanation is that you endorse some amount of grief, but not as much of it as people actually demonstrate.)
Does it matter at all that they’re in a spaceship etc. etc. etc.? Or does that just happen to be the example under discussion?
It matters that I’m not going to interact with them again (or with their dead bodies). For people who are still entangled with me, like tourists in Greece, I allow more grief because in principle my grief (and by TDT-like reasoning, the grief of others) may help prevent other drowning accidents in the future. But you’re right that the actual grief I experience in practice for tourists drowning in Greece is for practical purposes zero.
The example of a spaceship is esoteric; I wasn’t the one who chose it, but I responded to people discussing exotic propositions like grieving for “acausal” people such as those in other quantum branches. I can’t even afford to grieve for everyone who suffers on this Earth, in my own branch: 150,000 people die daily, and I haven’t got that much grief to spend even if I tried to grieve as much as possible (which I don’t want to).
We won WW2 by the skin of our teeth, are there any worlds where the Earth is ruled by Nazi Germany?
We did? No. Well, actually, there are semi-defensible scenarios where the Nazis could have won the war in Europe, but they’re extremely unlikely. I was going to answer just “No,” but the following factors suggest an even more freakishly lucky Nazi regime could have beaten the Soviet Union. The war in Europe was in reality the Soviets versus the Nazis, because that conflict was existential once it began.
Evidence the Soviets could have lost:
At one stage over 100% of Soviet GDP was going to the military.
The Soviets came quite close to losing 10% of their male population in WW2.
The GDP point means that all non-military economic activity was being supported by external subsidy, i.e. the USA and Lend-Lease. The casualties as a percentage of population are suggestive because it took killing 30% of the male population to convert the Afghans to Islam, and this seems a reasonable upper bound on the proportion of a population you need to kill to make a cultural change in a non-state society permanently at war. Any more complex society, like the one the Soviets had, will be less robust than that.
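(A rough bit of accounting shows what “over 100%” has to mean: with domestic output $Y$, military spending $M$, civilian use $C$, and external aid $A$, the resource constraint is approximately $C + M = Y + A$. If $M > Y$, then $C < A$, i.e. everything non-military is being covered by the subsidy. This is a simplification that lumps investment into $C$ and ignores drawing down existing stocks.)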
But in all seriousness, the Soviets could have won the war without the British Empire or the Americans committing combat troops, though maybe not without their economic support. The Nazis were not getting nukes, so once anyone on the other side did, they were doomed. The Japanese were doomed absent extraterrestrial intervention. I mean that literally: if a meteorite of sufficient size had landed on a major US city, maybe the US would have pulled out. Otherwise the Japanese were fucked from the word go. The Italians are irrelevant.
For the easiest data point against the possibility of WW2 being lost by the Allies, consider this: the Allies had over 50% of world GDP and had integrated battlegroups, command and control, and economic planning. The Nazis had the Italians for allies and could not meaningfully link up with the Japanese.
Absent rocks from space, the maximal surviving Nazi state is one of the following:
1) Hitler dies after annexing Czechoslovakia.
2) Hitler dies after dividing Poland with Stalin.
I don’t think Stalin would have started a war with a post-Hitler Nazi regime, so scenario 2 is plausible, but scenario 1 is overwhelmingly probable.
My personal favorite theory is that the Cold War was quantum suicide on a species-wide level. Since you seem versed in history: seen in counterfactual retrospect, how likely was our survival?
I’m not that well versed in history, but if we could somehow check all branches after V-J Day, nuclear weapons being used in anger by one or both sides in a quarter of them would not surprise me. Do keep in mind that it was the 80s before nuclear war would have been civilisation-ending. Europe and the Soviet Union were toast given a nuclear war from ’50 maybe; North America had to wait for ICBMs before it could be screwed by a nuclear war; and I can’t remember if it was Brazil or Australia that was the last place to be targeted by civilisation-ending numbers of bombs.
Quantum species suicide I doubt. By the time we could end civilisation, the Soviet Union was a gerontocracy, albeit one that truly thought the US was an existential enemy when it was barely an enemy at all. But hey, Stanislav Petrov. I don’t know.