MWI, weird quantum experiments and future-directed continuity of conscious experience

Response to: Quantum Russian Roulette

Related: Decision theory: Why we need to reduce “could”, “would”, “should”

In Quantum Russian Roulette, Christian_Szegedy describes a game that uses a “quantum source of randomness” to somehow make terminating the lives of 15 rich people to create one very rich person sound like an attractive proposition. To quote the key deduction:

Then the only result of the game is that the guy who wins will enjoy a much better quality of life. The others die in his Everett branch, but they live on in others. So everybody’s only subjective experience will be that he went into a room and woke up $750000 richer.

I think that Christian_Szegedy is mistaken, but in an interesting way. I think that the intuition at stake here is something about continuity of conscious experience. The intuition that Christian might have, if I may anticipate him, is that everyone in the experiment will actually experience getting $750,000, because somehow the world-line of their conscious experience will continue only in the worlds where they do not die. To formalize this, we imagine an arbitrary decision problem as a tree with nodes corresponding to decision points that create duplicate persons, and time increasing from left to right:

The skull and crossbones symbols indicate that the person created in the previous decision point is killed. We might even consider putting probabilities on the arcs coming out of a given node to indicate how likely a given outcome is. When we try to assess whether a given decision was a good one, we might want to look at what the utilities on the leaves of the tree are. But what if there is more than one leaf, and the person concerned is me, i.e. the root of the tree corresponds to “me, now” and the leaves correspond to “possible me’s in 10 days’ time”? I find myself querying for “what will I really experience” when trying to decide which way to steer reality. So I tend to want to mark some nodes in the decision tree as “really me” and others as “zombie-like copies of me that I will not experience being”, resulting in a generic decision tree that looks like this:

I decorated the tree with normal faces and zombie faces consistent with the following rules:

  1. At a decision node, if the parent is a zombie then child nodes have to be zombies, and

  2. If a node is a normal face then exactly one of its children must also be a normal face.

Let me call these the “forward continuity of consciousness” rules. These rules guarantee that there will be an unbroken line of normal faces from the root to a unique leaf. Some faces are happier than others, representing, for example, financial loss or gain, though zombies can never be smiling, since that would be out of character. In the case of a simplified version of Quantum Russian Roulette, where I am the only player and Omega pays the reward iff the quantum die comes up “6”, we might draw a decision tree like this:

The game looks attractive, since the only way of decorating it that is consistent with the “forward continuity of consciousness” rules places the worldline of my conscious experience such that I will experience getting the reward, while the zombie-me’s lose the money and then get killed. It is a shame that they will die, but it isn’t that bad, because they are not me; I do not experience being them. Killing a collection of beings who had a brief existence and who are a lot like me is not so great, but dying myself is much worse.
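The two “forward continuity of consciousness” rules are mechanical enough to check in code. Here is a minimal sketch (the example tree, node names and helper functions are illustrations of mine, not from the post): it enumerates every way of marking nodes as “me” or “zombie”, keeps only the decorations the rules allow, and confirms that each legal decoration traces exactly one unbroken line of “me” faces from the root to a leaf — so there are exactly as many legal decorations as there are leaves.

```python
from itertools import product

# A decision tree as nested dicts (hypothetical example: the root
# duplicates into 'a' and 'b', and 'b' duplicates again into 'b1', 'b2').
tree = {"name": "root", "children": [
    {"name": "a", "children": []},
    {"name": "b", "children": [
        {"name": "b1", "children": []},
        {"name": "b2", "children": []},
    ]},
]}

def nodes(t):
    """Yield every node of the tree, depth-first."""
    yield t
    for child in t["children"]:
        yield from nodes(child)

def valid(dec, t, is_root=True):
    """Check the 'forward continuity of consciousness' rules.

    dec maps node name -> True ('me') or False ('zombie').
    Rule 1: every child of a zombie is a zombie.
    Rule 2: a 'me' node with children has exactly one 'me' child.
    (Plus: the root, "me, now", must be 'me'.)
    """
    me = dec[t["name"]]
    kids = t["children"]
    if is_root and not me:
        return False
    if not me and any(dec[k["name"]] for k in kids):
        return False                      # rule 1 violated
    if me and kids and sum(dec[k["name"]] for k in kids) != 1:
        return False                      # rule 2 violated
    return all(valid(dec, k, is_root=False) for k in kids)

names = [n["name"] for n in nodes(tree)]
legal = [dict(zip(names, bits))
         for bits in product([True, False], repeat=len(names))
         if valid(dict(zip(names, bits)), tree)]
leaves = [n["name"] for n in nodes(tree) if not n["children"]]

print(len(legal), len(leaves))  # one legal decoration per leaf: 3 3
```

This makes concrete why the rules feel like they select “the” route of my future experience: every legal decoration is a single root-to-leaf worldline, and choosing among the legal decorations is choosing which leaf “I” end up at.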

Our intuitions about forward continuity of our own conscious experience, in particular that at each stage there must be a unique answer to the question “what will I be experiencing at that point in time?”, are important to us, but I think that they are fundamentally mistaken; in the end, the word “I” comes with a semantics that is incompatible with what we know about physics, namely that the process in our brains that generates “I-ness” is capable of being duplicated with no difference between the copies. Of course, a lot of ink has been spilled over the issue. The MWI of quantum mechanics dictates that I am being copied at a frightening rate, as the quantum system that I label as “me” interacts with other systems around it, such as incoming photons. The notion of quantum immortality comes from pushing the “unique unbroken line of conscious experience” intuition to its logical conclusion: you will never experience your own death; rather, you will experience a string of increasingly unlikely events that seem to be contrived just to keep you alive.

In the comments for the Quantum Russian Roulette article, Vladimir Nesov says:

MWI is morally uninteresting, unless you do nontrivial quantum computation. … when you are saying “everyone survives in one of the worlds”, this statement gets intuitive approval (as opposed to doing the experiment in a deterministic world where all participants but one “die completely”), but there is no term in the expected utility calculation that corresponds to the sentiment “everyone survives in one of the worlds”

The sentiment “I will survive in one of the worlds” corresponds to my intuition that my own subjective experience continuing, or not continuing, is of the utmost importance. Combine this with the intuition that the “forward continuity of consciousness” rules are correct, and we get the intuition that in a copying scenario, killing all but one of the copies simply shifts the route that my worldline of conscious experience takes from one copy to another, so that the following tree represents the situation if only two copies of me will be killed:

The survival of some extra zombies seems to be of no benefit to me, because I wouldn’t have experienced being them anyway. The reason that quantum mechanics and the MWI play a role, despite the fact that decision-theoretically the situation looks exactly the same as it would in a classical world (the utility calculations are the same), is that if we draw a tree where only one line of possibility is realized, we might encounter a situation where the “forward continuity of consciousness” rules have to be broken, namely actual death:
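Nesov’s point that no term in the expected utility calculation corresponds to “everyone survives in one of the worlds” can be seen directly in the bookkeeping. Here is a sketch for the simplified one-player game, with illustrative numbers of my own (a $750,000 reward, utility linear in dollars, and an assumed large negative utility for death): the sum only ever consults probabilities and utilities, so it comes out the same whether the losing branches are real Everett worlds or merely possible classical outcomes.

```python
# Illustrative numbers (my own assumptions, not from the post):
# utility linear in dollars, death assigned a large negative utility.
p_win = 1 / 6           # the quantum die comes up "6"
u_win = 750_000         # utility of the reward
u_death = -10_000_000   # assumed utility of dying

# Expected utility of playing the simplified game. Note that nothing
# here encodes "I survive in one of the worlds": the same two terms
# appear under MWI and under a single-world classical coin flip.
ev_play = p_win * u_win + (1 - p_win) * u_death
ev_decline = 0          # keep your money and your life (baseline)

print(ev_play)          # about -8208333.33, far below ev_decline
```

Under these (assumed) numbers the game is plainly a bad deal; it only starts to look attractive if one adds the extra, non-decision-theoretic premise that the worldline of experience is guaranteed to route through the winning branch.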

The interesting question is: why do I have a strong intuition that the “forward continuity of consciousness” rules are correct? Why does my existence feel smooth, unlike the topology of a branch point in a graph?

ADDED:

The problem of how this all relates to sleep, anaesthesia or cryopreservation has come up. When I was anaesthetized, there appeared to be a sharp, instantaneous jump from the anaesthetic room to the recovery room, indicating that our intuition about continuity of conscious experience treats “go to sleep, wake up some time later” as being rather like ordinary survival. This is puzzling, since a period of sleep, anaesthesia or even cryopreservation can be arbitrarily long.