Living in Many Worlds
Some commenters have recently expressed disturbance at the thought of constantly splitting into zillions of other people, as is the straightforward and unavoidable prediction of quantum mechanics.
Others have confessed themselves unclear as to the implications of many-worlds for planning: If you decide to buckle your seat belt in this world, does that increase the chance of another self unbuckling their seat belt? Are you being selfish at their expense?
Just remember Egan’s Law: It all adds up to normality.
(After Greg Egan, in Quarantine.[1])
Frank Sulloway said [2]:
Ironically, psychoanalysis has it over Darwinism precisely because its predictions are so outlandish and its explanations are so counterintuitive that we think, Is that really true? How radical! Freud’s ideas are so intriguing that people are willing to pay for them, while one of the great disadvantages of Darwinism is that we feel we know it already, because, in a sense, we do.
When Einstein overthrew the Newtonian version of gravity, apples didn’t stop falling, planets didn’t swerve into the Sun. Every new theory of physics must capture the successful predictions of the old theory it displaced; it should predict that the sky will be blue, rather than green.
So don’t think that many-worlds is there to make strange, radical, exciting predictions. It all adds up to normality.
Then why should anyone care?
Because there was once asked the question, fascinating unto a rationalist: What all adds up to normality?
And the answer to this question turns out to be: quantum mechanics. It is quantum mechanics that adds up to normality.
If there were something else there instead of quantum mechanics, then the world would look strange and unusual.
Bear this in mind, when you are wondering how to live in the strange new universe of many worlds: You have always been there.
Religions, anthropologists tell us, usually exhibit a property called minimal counterintuitiveness; they are startling enough to be memorable, but not so bizarre as to be difficult to memorize. Anubis has the head of a dog, which makes him memorable, but the rest of him is the body of a man. Spirits can see through walls; but they still become hungry.
But physics is not a religion, set to surprise you just exactly enough to be memorable. The underlying phenomena are so counterintuitive that it takes long study for humans to come to grips with them. But the surface phenomena are entirely ordinary. You will never catch a glimpse of another world out of the corner of your eye. You will never hear the voice of some other self. That is unambiguously prohibited outright by the laws. Sorry, you’re just schizophrenic.
The act of making decisions has no special interaction with the process that branches worlds. In your mind, in your imagination, a decision seems like a branching point where the world could go two different ways. But you would feel just the same uncertainty, visualize just the same alternatives, if there were only one world. That’s what people thought for centuries before quantum mechanics, and they still visualized alternative outcomes that could result from their decisions.
Decision and decoherence are entirely orthogonal concepts. If your brain never became decoherent, then that single cognitive process would still have to imagine different choices and their different outcomes. And a rock, which makes no decisions, obeys the same laws of quantum mechanics as anything else, and splits frantically as it lies in one place.
You don’t split when you come to a decision in particular, any more than you particularly split when you take a breath. You’re just splitting all the time as the result of decoherence, which has nothing to do with choices.
There is a population of worlds, and in each world, it all adds up to normality: apples don’t stop falling. In each world, people choose the course that seems best to them. Maybe they happen on a different line of thinking, and see new implications or miss others, and come to a different choice. But it’s not that one world chooses each choice. It’s not that one version of you chooses what seems best, and another version chooses what seems worst. In each world, apples go on falling and people go on doing what seems like a good idea.
Yes, you can nitpick exceptions to this rule, but they’re normal exceptions. It all adds up to normality, in all the worlds.
You cannot “choose which world to end up in.” In all the worlds, people’s choices determine outcomes in the same way they would in just one single world.
The choice you make here does not have some strange balancing influence on some world elsewhere. There is no causal communication between decoherent worlds. In each world, people’s choices control the future of that world, not some other world.
If you can imagine decision-making in one world, you can imagine decision-making in many worlds: just have the world constantly splitting while otherwise obeying all the same rules.
In no world does two plus two equal five. In no world can spaceships travel faster than light. All the quantum worlds obey our laws of physics; their existence is asserted in the first place by our laws of physics. Since the beginning, not one unusual thing has ever happened, in this or any other world. They are all lawful.
Are there horrible worlds out there, which are utterly beyond your ability to affect? Sure. And horrible things happened during the twelfth century, which are also beyond your ability to affect. But the twelfth century is not your responsibility, because it has, as the quaint phrase goes, “already happened.” I would suggest that you consider every world that is not in your future to be part of the “generalized past.”
Live in your own world. Before you knew about quantum physics, you would not have been tempted to try living in a world that did not seem to exist. Your decisions should add up to this same normality: you shouldn’t try to live in a quantum world you can’t communicate with.
Your decision theory should (almost always) be the same, whether you suppose that there is a 90% probability of something happening, or if it will happen in 9 out of 10 worlds. Now, because people have trouble handling probabilities, it may be helpful to visualize something happening in 9 out of 10 worlds. But this just helps you use normal decision theory.
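The arithmetic is identical either way. Here is a minimal sketch, with made-up utility numbers, just to show that the two framings compute the same expected value:

```python
# Sketch: the same gamble framed two ways. The utilities here are
# invented illustrative numbers, not anything from the text.
u_yes, u_no = 10.0, 2.0  # utility if the thing happens / if it doesn't

ev_single = 0.9 * u_yes + 0.1 * u_no     # "90% probability" framing
ev_worlds = (9 * u_yes + 1 * u_no) / 10  # "9 out of 10 worlds" framing
print(ev_single, ev_worlds)              # the same number either way
```

Averaging over worlds weighted by their measure is just expected-value arithmetic under another name, which is why the decision theory comes out the same.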
Now is a good time to begin learning how to shut up and multiply. As I note in Lotteries: A Waste of Hope:
The human brain doesn’t do 64-bit floating-point arithmetic, and it can’t devalue the emotional force of a pleasant anticipation by a factor of 0.00000001 without dropping the line of reasoning entirely.
And in New Improved Lottery:
Between zero chance of becoming wealthy, and epsilon chance, there is an order-of-epsilon difference. If you doubt this, let epsilon equal one over googolplex.
If you’re thinking about a world that could arise in a lawful way, but whose probability is a quadrillion to one, and something very pleasant or very awful is happening in this world . . . well, it does probably exist, if it is lawful. But you should try to release one quadrillionth as many neurotransmitters, in your reward centers or your aversive centers, so that you can weigh that world appropriately in your decisions. If you don’t think you can do that . . . don’t bother thinking about it.
Otherwise you might as well go out and buy a lottery ticket using a quantum random number, a strategy that is guaranteed to result in a very tiny mega-win.
Or here’s another way of thinking about it: Are you considering expending some mental energy on a world whose frequency in your future is less than a trillionth? Then go get a 10-sided die from your local gaming store, and, before you begin thinking about that strange world, start rolling the die. If the die comes up 9 twelve times in a row, then you can think about that world. Otherwise don’t waste your time; thought-time is a resource to be expended wisely.
You can roll the die as many times as you like, but you can't think about the world until 9 comes up twelve times in a row. Then you can think about it for a minute. After that you have to start rolling the die again.
This may help you to appreciate the concept of “trillion to one” on a more visceral level.
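A quick check of the arithmetic behind the exercise, using exact rational numbers so no precision is lost:

```python
from fractions import Fraction

# Each roll of a fair 10-sided die shows a 9 with probability 1/10;
# twelve independent rolls in a row multiply out to one in a trillion.
p_streak = Fraction(1, 10) ** 12
print(p_streak)  # 1/1000000000000
```

Twelve consecutive required outcomes at 1/10 each is exactly the "trillion to one" threshold the exercise names.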
If at any point you catch yourself thinking that quantum physics might have some kind of strange, abnormal implication for everyday life—then you should probably stop right there.
Oh, there are a few implications of many-worlds for ethics. Average utilitarianism suddenly looks a lot more attractive—you don’t need to worry about creating as many people as possible, because there are already plenty of people exploring person-space. You just want the average quality of life to be as high as possible, in the future worlds that are your responsibility.
And you should always take joy in discovery, as long as you personally don't know a thing. It is meaningless to talk of being the "first" or the "only" person to know a thing, when everything knowable is known within worlds that are in neither your past nor your future, and are neither before nor after you.
But, by and large, it all adds up to normality. If your understanding of many-worlds is the tiniest bit shaky, and you are contemplating whether to believe some strange proposition, or feel some strange emotion, or plan some strange strategy, then I can give you very simple advice: Don’t.
The quantum universe is not a strange place into which you have been thrust. It is the way things have always been.
1. Greg Egan, Quarantine (London: Legend Press, 1992).
2. Robert S. Boynton, “The Birth of an Idea: A Profile of Frank Sulloway,” The New Yorker (October 1999).
One of the things that always comes up in my mind regarding this is the concept of space relative to these other worlds. Does it make sense to say that they're "on top of us" and out of phase so we can't see them, or do they propagate "sideways", or is it nonsensical to even talk about it?
It’s nonsensical to talk about in the terms you give, though it’s not all that difficult to talk about with a little background.
The worlds are points in an infinite-dimensional, or at least very high-dimensional, configuration space. Each set of three dimensions in the configuration space corresponds to the position of one particle in that universe. I’d suggest reading Classical Configuration Space and The Quantum Arena. That is, unless you’ve already read them some time in the four years since you posted that comment.
Your decision theory should (almost always) be the same,…
Where is the exception?
“constantly splitting into billions of other people, as is the straightforward and unavoidable prediction of quantum mechanics”
Quantum mechanics does not even “straightforwardly and unavoidably” predict the splitting of an electron. It predicts the splitting of the electron’s wavefunction, but what that means is the whole question. Under some interpretations, the wavefunction will be derived from ordinary probability after all, and reifying it—supposing it to be an independent element of reality—is Mind Projection Fallacy. Under other interpretations, the wavefunction corresponds to an ensemble of independent histories, and there is no splitting. Only under what I called Parmenidean Many Worlds does the splitting of wave packets correspond to an actual multiplication of entities—and then one has to suppose that the entities in question only exist vaguely.
Of course, some of what you say applies to other situations where one has a very large number of near-duplicates, as in a spatially infinite universe.
But, by and large, it all adds up to normality. If your understanding of many-worlds is the tiniest bit shaky, and you are contemplating whether to believe some strange proposition, or feel some strange emotion, or plan some strange strategy, then I can give you very simple advice: Don’t.
Good to know.
Why tell readers that their other selves in other worlds are dying of cancer, so they should really think about cryonics, and then go on and make a post like this?
If I can’t even get a glimpse of these other worlds, and my decisions don’t alter them, why would that make utilitarianism seem more valid (it isn’t)?
One definite exception is “should I say that many-worlds is true?”
Eliezer’s claimed exception is “Average utilitarianism suddenly looks a lot more attractive—you don’t need to worry about creating as many people as possible, because there are already plenty of people exploring person-space. You just want the average quality of life to be as high as possible, in the future worlds that are your responsibility.”
This argument doesn’t seem very strong to me. I could just as well say, “I don’t need to worry about a high average quality of life, because the average is fixed, and is as high as it can be in any case. I just want to see as many people in my world as I can, in the worlds that are my responsibility.”
It looks to me like Eliezer already preferred average utilitarianism even before knowing about many-worlds, or at least independently of this fact, and is using many-worlds to justify his preference.
Eliezer has argued in the past against discount rates: and with some reasonableness, whether this is ultimately correct or not (I don’t know.) But the principles of this argument would imply that we also should discount the value of people in the worlds we are not in; and so given that the average utility over all worlds is constant, average utilitarianism implies that our choice of worlds does not matter, which implies that none of our choices matter.
Besides (in the usual single world): is Eliezer willing to kill off everyone except the happiest person, therefore raising the average?
Well said.
In a few worlds, there are simulations in which spaceships travel faster than light. Minor nitpick.
It’s still meaningful to talk of being the first or only person in your world; and, while this may not affect the point generally, the motive to independently work out something takes a big hit from the knowledge that you could just look it up.
correction: “we also should NOT discount the value of people etc.”
So what tools do all you self-improving rationalists use to help with the “multiply” part of “shut up and multiply”? A development environment for a programming/scripting language? Mathematica? A desk calculator? Mathcad? Spreadsheet? Pen and paper?
Unknown, don’t say “our choice of worlds”. Our decisions don’t determine which world we’re in (there is no preexisting “you” that goes into one world but not another), they determine the relative measures of the worlds, so the average is not fixed (or, rather, not fixed independently of our actions—this is really just the old argument over determinism).
I would also like to hear Eliezer’s answer to your final question.
“One of the things that always comes up in my mind regarding this is the concept of space relative to these other worlds. Does it make sense to say that they’re “ontop of us” and out of phase so we can’t see them, or do they propagate “sideways”, or is it nonsensical to even talk about it?”
It’s nonsensical. The space that we see is just an artifact of a lower level of reality. See http://www.acceleratingfuture.com/tom/?p=124.
“And you should always take joy in discovery, as long as you personally don’t know a thing.”
I generally give independent, replicated discoveries the same “joy status” (if that makes sense) as first-time-in-this-branch discoveries. However, you should take a hit when you’re just rereading someone else’s work, which isn’t as challenging, or as fun.
One place I tend to think differently in the context of multiverse theories is behavior that puts other people at risk. Occasionally I am in a hurry and drive too fast through a residential neighborhood. Then afterwards, I think it’s lucky that no young children came running out into the street at the time, I might not have seen them and been able to stop in time. But in the context of the MWI, it did happen in some worlds. My reckless action did not merely have a probability of causing harm, it did cause genuine harm. I directly reduced the measure of living beings. It’s true, I didn’t see the results of my actions; it is a bit like tossing a hand grenade over a wall. I don’t see what happens, but I know bad things did happen out of my sight.
Thinking like this has perhaps moderated some of my more reckless tendencies. I’m not committed to multiverse models but they do seem to have Occam’s razor in their favor.
Some commenters have recently expressed disturbance at the thought of constantly splitting into billions of other people, as is the straightforward and UNAVOIDABLE prediction of quantum mechanics.
Please. Generating so many paragraphs here displaying this sort of smug assurance in your own conclusions about highly controversial topics is the exact opposite of “overcoming bias”.
I have noticed Robin gently reminding you of this fact; perhaps it is time to pay some attention to him, if not your other critics. . .
The only place where I see it not summing to normality is quantum immortality—any thoughts?
Taking inspiration from Mike Blume’s point, how many human beings would have to live for there to be a reasonable chance (say 75%) that one of them is immortal in our universe?
Prakash, I thought the point of quantum immortality was that everyone is “immortal” because everyone has a duplicate who lives on, however improbably, in some branch of the wavefunction, no matter what happens here.
But the probability that anyone is immortal in any specific branch is basically zero. There is a nonzero probability of death per unit time, and so the probability of literal immortality is infinitesimal, being a product of infinitely many quantities less than 1.
Please. Generating so many paragraphs here displaying this sort of smug assurance in your own conclusions about highly controversial topics is the exact opposite of “overcoming bias”.
One person doesn’t need to pretend that he doesn’t grasp something until a certain critical mass of the “right” people catch up. Correctness isn’t up for a vote, and the feeling that it is is nothing more than an artifact of social wiring.
You do not have to accept the conclusion. You also do not have to insist that someone else mimic your own uncertainty about any given topic. At the least, perhaps you should go and make sure his reasoning is flawed before you do.
Unknown:
No. Because that creates Death events, which are very large negative utilities. It increases the average number of people who experience short lives.
I have a suspicion that when all is said and done and known, quantum immortality is not going to work out.
Where on Earth are you pulling this from? If my memory serves me, I converted to average utilitarianism as a direct result of believing in a Big World.
You’ve just violated Egan’s Law; your statement does not add up to normality. The quality of life is not, intuitively, “fixed” in the one normal world you once thought you lived in; you care about the future and try to steer it. Quality of life is not independent of your decisions in many-worlds, either.
iwdw:
Because some of their future selves, within their present self’s control and responsibility, will go on to suffer the same fate.
Mitchell Porter: There is a nonzero probability of death per unit time, and so the probability of literal immortality is infinitesimal, being a product of infinitely many quantities less than 1.
This is mathematically incorrect. If the quantities tend to 1 fast enough, their product will converge to a positive number. For example, if you have a 1⁄2 chance of living another 50 years, then if you do, a 3⁄4 chance of another 50, then a 7⁄8 chance of another 50, and so on, the probability that you will never die is about 0.29.
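A quick numerical check of the convergence claim above, using the survival chances given in the comment:

```python
# If the chance of surviving the n-th 50-year span is 1 - 2**-n
# (1/2, then 3/4, then 7/8, ...), the chance of surviving every
# span is the product of these factors. The partial products
# settle near 0.2888 instead of decaying to zero.
p = 1.0
for n in range(1, 200):
    p *= 1 - 2.0 ** -n
print(round(p, 4))  # 0.2888
```

The product converges to a positive limit because the factors approach 1 fast enough that the sum of the shortfalls (1/2 + 1/4 + 1/8 + ...) is finite.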
So the trick is to always be getting better fast enough at not dying.
For new converts to the idea of Many Worlds, I offer this parable, as a warning.
Richard, that’s a good observation, it deserves a place in physics-of-immortality folklore as a criterion for whether futures-with-true-immortality form a set of more than measure zero in your preferred physics. Perhaps it will inspire some future Dyson or Tipler to show us that if only we can make it to the endless final age of the universe incarnated in the appropriate form, the rate of fatal component failure might behave as you describe.
Control? Such things as decisions exist, but control has to be an illusion; the decisions have already been made since the beginning of the world. You cannot affect the thickness of worlds; they have been set down from forever.
Pearson, your intuitions about time would seem to be running wild in the absence of their subject matter. The future is not determined before you make your decision; the timeless perspective is just that, timeless, not eternal. And of course that adds up to normality, too.
mitchell porter: There is a nonzero probability of death per unit time, and so the probability of literal immortality is infinitesimal, being a product of infinitely many quantities less than 1.
The problem is that probability is also relative: for example, if you inevitably die in 99% of MWI worlds every second, and live on normally in the rest of them, you still make the same decisions, you grow used to remaining alive; you can’t do anything about those 99%, so you don’t include that fact in your decision-making. More generally, a universe that disintegrates in 99% of the cases, will evolve the same kind of intelligent decision-makers as the universe that doesn’t disintegrate.
Quantum immortality is still mysterious to me, although spectrum of near-dead states that are more likely then alive-and-fine states makes it a sour prospect in any case.
“You can roll the dice as many times as you like, but you can’t think about the world until 9 comes up twelve times in a row. Then you can think about it for a minute. After that you have to start rolling the die again.”
You may have hit upon a great way in general to get people to put improbable but vivid risks in perspective. Scared of flying on that plane? Skip your flight only if you roll your X-sided die and get the number C, y times in a row. Etc. It’s worth expanding upon as a solution to how irrational fears can warp the actions of generally fairly rational people.
“Egan’s Law: It all adds up to normality.”
Quantum computers already violate normality for me.
I honestly don’t see what the argument for quantum immortality even is. You don’t randomly become one of your successors, you become all of them, including the dead ones.
Since the beginning, not one unusual thing has ever happened, in this or any other world. They are all lawful.
Lawful evil to be precise.
steven: Too much D&D? I prefer chaotic neutral… Hail Eris! All hail Discordia! =)
I honestly don’t see what the argument for quantum immortality even is. You don’t randomly become one of your successors, you become all of them, including the dead ones.
The argument is this: if having living successors is just as good as our naive concept of survival, then it seems we’re guaranteed to always have something as good as that naive concept. It seems like MWI is telling us that, in almost any circumstances, we will always have some successors that are still alive.
The dead ones don’t enter into it. You can’t experience being dead.
But it’s not exactly obvious to a man-on-the-street like me that we would always have successors: one could imagine someone whose situation was so dire that he had zero successors.
(Of course, he himself is a successor to someone who has successors other than him, but that’s not as good.)
You can’t experience being dead.
Why not call being dead the null experience? Definitions shouldn’t matter like this.
Here’s a question from a layman: if untold trillions of new universes are being created all the time, where is all that energy coming from to create them?
“Your decision theory should (almost always) be the same, whether you suppose that there is a 90% probability of something happening, or if it will happen in 9 out of 10 worlds. ”
I STRONGLY disagree here. If you suppose there is a 90% probability of something happening this usually means that you haven’t updated your priors enough to recognize that it actually happens in approximately 100% of worlds, and less frequently (but sadly, probably not 9 times less frequently) that you haven’t updated enough to recognize that it actually almost never or outright never happens.
“Average utilitarianism suddenly looks a lot more attractive—you don’t need to worry about creating as many people as possible, because there are already plenty of people exploring person-space.”
If many worlds meant infinitely many people this claim would be quite plausible to me, but why should aggregation stop mattering just because there are bignum people?
I’m befuddled at the average utilitarianism thing.
First—how could the truth of a fact of physics (big worlds) ever be relevant to the truth of an ethical theory like average vs total utilitarianism?
Second—“You just want the average quality of life to be as high as possible, in the future worlds that are your responsibility.” is not AFAIK the same thing as average utilitarianism; average utilitarianism would average across everything, including what’s not your responsibility. This matters for concrete prescriptions.
Third—suppose someone has an extremely high quality of life, but a bit lower than the average; are you really going to tell him you regret his being born? It just seems absurd.
Fourth—it sounds like average utilitarianism requires an unambiguous binary way to decide whether you’re a continuation of some past person-stage.
“Death events, which are very large negative utilities. It increases the average number of people who experience short lives.” also seems very dubious to me. Too much like a justification aimed at pushing “it all adds up to normality” past its breaking point, back to a nice normal sort of world where death is bad, and so is signing up for cryonics, not marrying, not valuing family FAR above outsiders, not eating exclusively your tribes food and engaging in exclusively your tribe’s sexual practices, etc. OTOH, I do see a very small chance (vagueness in my thought, not low measure or low calibrated probability) that death actually is VERY bad, not just neutral, coming from quantum immortality, as most of the ways of surviving many causes of death may be much worse than neutral.
Also: consider a computer programmed to create person-stages and then erase them all the time, maybe giving each one a moment of (bland, neutral) experience. In average utilitarianism building these things is either almost the best or almost the worst thing you can do depending on whether everyone else is unhappy or happy. I don’t find this plausible.
Quantum immortality seemed to work when I was imagining my consciousness as a thread running through the many worlds, one that couldn’t possibly enter a world where I was dead. But if I understand rightly, consciousness is not like this, it is not epiphenomenal, it is not a thread that runs through one world and not the others, it is splitting along with the world around me and the rest of my body.
So if I undergo the classic 50⁄50 decaying radioactive particle + gun experiment, it would seem to me that I have a 50% chance of my consciousness surviving and a 50% chance of it going ping out of existence when the bullet pulverises my brain.
If that even makes sense, then I’ve managed to understand a lot more of the quantum mechanics and zombie sequences than I thought I had.
I was hoping someone would bring up quantum immortality because that was what came to mind at the end of the post. Shooting myself in the head, on the assumption that quantum immortality will make the gun jam every time, would be a great party piece but it would certainly count as a strange strategy.
Just by the by, it might be a good party piece for you, but it would be a truly horrible party piece for half the people you performed it to.
“Here’s a question from a layman: if untold trillions of new universes are being created all the time, where is all that energy coming from to create them?”
Well, you’ve got the same problem with a single world: Where did the energy for our ‘single’ universe come from when ‘it was created’?
The problem here is that you assume that universes are created which did not exist before; in this case you indeed need to take the energy from somewhere. But as I understand, they never did not exist (beware of double negation!). They already existed before the split took place in your personal memory.
But somehow I still can’t buy into this thing; where is the symmetry? Why do splits happen into the future, but not into the past?
Of course, we evaluate the past according to the information we retrieve over time (that’s the whole point of Bayes/Markov, isn’t it?). In this way you can say that with every bit of information/evidence, our memory makes a split into the past. In this way ‘fresher’ memory gets mixed up with ‘decaying’ memory and thus we get a different/more diffuse image of the past.
But it doesn’t sound the same as the ‘future’ splits. We don’t have a fresh memory of the future. Take the example of lotteries: we don’t remember their outcome seconds before.
Splits happen forward in time for the same reason a glass which has fallen and smashed on the floor doesn’t spring back up and spontaneously reassemble itself. And these “universes” are really just isolated amplitude blobs in the total, timeless wavefunction. They aren’t created; rather, any amplitude blob roughly factorizing as a “world” will eventually decohere into several smaller amplitude blobs also factorizing as “worlds”, which, as the wavefunction further evolves with time, do not interact (i.e., they interact about as often as that glass reassembles).
One person doesn’t need to pretend that he doesn’t grasp something until a certain critical mass of the “right” people catch up. Correctness isn’t up for a vote, and the feeling that it is is nothing more than an artifact of social wiring.
Anyone with a bit of insight and experience with the sociology of group behavior will read OB and see some glaringly obvious “artifacts of social wiring” in the psychology behind many of the postings and comments here.
It all adds up to normality, in all the worlds.
Eliezer, you say this, and similar things, a number of times here. They are, of course, untrue. There are uncountably many instances where, for example, all coins in history flip tails every time. You mean that it almost always adds up to normality, and this is true. For very high abnormality, the measure of worlds where it happens is equal to the associated small probability.
Regarding average utilitarianism, I also think this is a highly suspect conclusion from this evidence (and this is coming from a utilitarian philosopher). We can talk about this when you are in Oxford if you want: perhaps you have additional reasons that you haven’t given here.
Quantum immortality seemed to work when I was imagining my consciousness as a thread running through the many worlds, one that couldn’t possibly enter a world where I was dead. But if I understand rightly, consciousness is not like this, it is not epiphenomenal, it is not a thread that runs through one world and not the others, it is splitting along with the world around me and the rest of my body.
Right, it’s less like a thread and more like a tree.
So if I undergo the classic 50⁄50 decaying radioactive particle + gun experiment, it would seem to me that I have a 50% chance of my consciousness surviving and a 50% chance of it going ping out of existence when the bullet pulverises my brain.
I don’t understand this at all (if we’re assuming MWI is true). If MWI is true, a person survives (or rather, has something as good as survival) by having “successors”—that is, beings who remember being him.
In the 50⁄50 case, he has half as many successors as he would normally have. But it’s not obvious why this should really trouble him (aside from knock-on effects on his loved ones in the half of existence where he dies, etc).
Sorry for the double post, but I just had a “Eureka moment”, and I think I can now explain the intuitive appeal of the idea of Quantum Immortality. It might still be wrong, but I can explain the appeal.
As above, a “successor” is a being who is psychologically continuous with you and remembers being you, et cetera. I want to consider 4 cases:
Case 1: MWI is false. In almost any normal circumstances (i.e. not involving teleporters or uploading), a person either has one successor or zero.
Case 2: MWI is true. In normal circumstances, a person has a huge number of successors.
Case 3: MWI is true. A person undergoes the 50⁄50 experiment, and still has a very large number of successors (though only half as many as in Case 2).
Case 4: MWI is true. A person undergoes a 1000⁄1 experiment, in which he dies with 99.9% probability. But because MWI is true, he still has a large number of successors (though only 0.1% of the number in Case 2).
Quantum Immortality is appealing insofar as Case 4 still seems to be better than Case 1. In other words, the combination of [MWI true, event with high chance of death] intuitively seems better than [MWI false, ordinary boring event].
As I said, it can still be wrong, but I think it’s appealing for reasons along these lines.
Another area where the MWI makes a difference is the free will vs determinism debate. The MWI unlike most other quantum interpretations is fully deterministic. There is no longer such a thing as “quantum randomness”. The apparent randomness is basically an illusion due to our consciousness progressing into multiple worlds with multiple outcomes.
This means that in a sense, the MWI returns us to the classical Newtonian universe of clockwork billiard balls clicking together as the basis for reality. It is not precisely that model of course, but the physics is just as deterministic. Hence we are back to the old puzzle of reconciling our feelings of free will with the fact that all of our decisions are ultimately completely determined by factors outside of ourselves.
For most of a century, certain schools of philosophy have fastened onto the supposed randomness of QM as a source of variation that could explain free will. The problem I always saw was that basing free will on quantum randomness might explain the freedom but not the willfulness; being at the mercy of quantum events seems to give no place for asserting control over one’s actions and decisions. But then, some philosophers have claimed that brains could perhaps influence quantum events, pointing to the supposed collapse of the wave function being caused by consciousness as precedent. And we all know how deep that rabbit hole goes.
While this issue does not have any practical implications that I know of, it does mean that one facile escape from the dilemma is no longer available to believers in the MWI.
Eliezer, I said “It looks to me like...” to indicate a very subjective impression based on the text of your post. If it wasn’t true, that’s fine.
In that case, though, I don’t see why your conclusion in favor of average utilitarianism isn’t just as much a violation of Egan’s Law (to the extent that there is such a law) as anything I said. An illustration of this is Michael Vassar’s point that your claim about death was made precisely in order to move away from the counterintuitive implications of average utilitarianism, more towards the position of total utilitarianism.
Let’s suppose that you are the only person in the world. Without talking about death, if you had the power to create a universe, would you rather create a universe with 5 people in it (including you) who each had 149,146.2414 degrees of happiness, or one with 5 billion people in it, each with 149,146.2413 degrees of happiness?
(These “degrees” of happiness are simply an arbitrary measure in order to make my point, much like in the torture and dust specks discussion.)
Hence we are back to the old puzzle of reconciling our feelings of free will with the fact that all of our decisions are ultimately completely determined by factors outside of ourselves.
The part I bolded is never necessary, is it? Factors in the deterministic processes in my brain are factors inside myself, by definition. Is there really still a debate about free will? I’m at a loss to understand why. The subjective perception of free will is easily explainable in a fully deterministic world.
Here’s a question from a layman: if untold trillions of new universes are being created all the time, where is all that energy coming from to create them?
I’m not sure this question is meaningful in context. It would be like asking where all that time is coming from. For both, the timelessness discussion suggests that they all just are. Everything is. Universes are not coming into being or leaving, and there is no “now” pointer that is sliding along a timeline. It is also not meaningful to ask “where” they are.
But they feel meaningful, as does “now.” That presumably means that I should gut-assimilate MWI.
Your main argument is “Learning QM shouldn’t change your behavior”. This is false in general. If your parents own slaves and you’ve been taught that people in Africa live horrible lives and slavery saves them, and you later discover the truth, you will feel and act differently. Yet you shouldn’t expect your life far away from Africa to be affected: it still adds up to normality.
Some arguments are convincing (“you can’t do anything about it so just call it the past” and “probability”), but they may not be enough to support your conclusion on their own.
So, these universes aren’t really being created, but have always existed? That is easier to comprehend for me. Not that the multiverse needs my comprehension or anything.
Allan Crossman comments that
in the 50⁄50 case, he has half as many successors as he would normally have. But it’s not obvious why this should really trouble him.
Did everyone get what Crossman is saying? He is saying that it is not obvious to him why a MWI believer would hesitate particularly to play quantum Russian roulette with bullets in half of the chambers of the revolver!
Clearly then Crossman disagrees completely with Eliezer, who writes in this blog entry that “Your decision theory should (almost always) be the same, whether you suppose that there is a 90% probability of something happening, or if it will happen in 9 out of 10 worlds.”
So if I undergo the classic 50⁄50 decaying radioactive particle + gun experiment, it would seem to me that I have a 50% chance of my consciousness surviving and a 50% chance of it going ping out of existence when the bullet pulverises my brain.
Exactly. QI just doesn’t work the way many would like it to work. Consider “Quantum Immortality Lite”, where you load the gun with a sleeping pill instead of a deadly bullet. This version is easier for humans to visualize, because it involves no “mysterious” (previously not experienced) phenomena, such as permanently ceasing to exist. The line of reasoning is the same: awake copies of the experimenter will notice that he never seems to fall asleep, but you will be happily snoring on the floor with near certainty if you pull the trigger ten times. It won’t always jam for you.
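A quick check of the “Quantum Immortality Lite” numbers, as a minimal sketch (the function name is mine, and the pill-gun is assumed to fire independently with probability 0.5 on each pull):

```python
# Probability the 50/50 "pill gun" never fires across n pulls,
# i.e. the fraction of branches in which you are still awake afterwards.
def never_fires(n, p_fire=0.5):
    return (1 - p_fire) ** n

for n in (1, 5, 10):
    print(n, never_fires(n))
```

After ten pulls, only about 1 branch in 1024 still has you awake; with probability about 99.9% you are asleep on the floor.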
Put me down as a long time many-worlder who doesn’t see how it makes average utilitarianism more attractive.
I believe Eliezer’s point is that the MWI nerfs the argument that we are depriving nonexistent people of their existence by not creating them. If the universe is as vast as the MWI implies, then they already exist out there somewhere.
However, that particular argument isn’t really a utilitarian argument in the first place. It’s more of an egalitarian argument. I think a total utilitarian would be quite willing to never create a person if that person would be slightly less happy than the umpteenth copy of Felix.
Did everyone get what Crossman is saying? He is saying that it is not obvious to him why a MWI believer would hesitate particularly to play quantum Russian roulette with bullets in half of the chambers of the revolver!
Sort of, with the necessary caveats that we’re assuming he doesn’t care about how the act affects other people, and also isn’t worrying about the possibility of surviving in a brain-damaged state, etc.
Clearly then Crossman disagrees completely with Eliezer
That’s a bit strong. I said something wasn’t obvious to me, which is hardly the same as complete disagreement. :-)
QI just doesn’t work the way many would like it to work. Consider “Quantum Immortality Lite”, where you load the gun with a sleeping pill instead of a deadly bullet.
If you undergo the “quantum suicide” experiment but with sleeping pills instead of bullets, you will have just as many “successors” as if you had done nothing at all. All of the versions of you that go to sleep wake up later.
Since they’re alive and remember being you, nothing stops them from counting as true successors. This is different from dead people.
Yudkowsky, excuse the flowery language in my last post. Let me put it like this: what meaning has control when you can’t change the future?
In your own words: “When your extrapolation of the future changes, from one time to another, it feels like the future itself is changing. Yet you have never seen the future change. When you actually get to the future, you only ever see one outcome.
How could a single moment of time, change from one time to another?”
And while I agree it does all add up to normality, what I object to is mixing the levels of description: control is fine in normality, but control is not okay when discussing world thickness, number of descendants in certain paths, etc., because these are fixed; you just have no memory of them.
Since they’re alive and remember being you, nothing stops them from counting as true successors. This is different from dead people.
Unless they’re freshly dead, so they could theoretically be cryonicized. So should we expect to stay freshly dead forever?
If you’re freshly dead you shouldn’t expect anything at all.
But I’m sure that’s not quite what you mean. As I understand it, Quantum Immortality is the view that the only way to really die is to have no “successors” at all, where a successor is loosely defined as someone who remembers being you.
I think that’s all it is. It’s not claiming that the universe will go to special lengths (beyond ordinary MWI) to ensure that you do indeed have such successors. But if ordinary MWI implies that even bizarre events, like corpses not degrading, actually happen in tiny branches of reality, then your scenario is one way to have successors.
A freshly dead person has no experience. Some of your successors, though, would be cryonically revived—a much larger fraction, if you’ve actually signed up for cryonics.
Michael:
The point, I think, is not that there are bignum people, but that you don’t have to worry about any possible people not ‘getting the chance to live’. I see the appeal of this, but am not really swayed—Steven’s last objection in particular seems very strong.
I really enjoyed the quality of the comments on this thread.
Am I the only one reading “freshly dead” and thinking of The Princess Bride? Billy Crystal proves that cryonics works!
the straightforward and unavoidable prediction of quantum mechanics.
Newtonian mechanics makes many straightforward and unavoidable predictions which do not happen to be true. I assume that no one has ever tested this prediction, or you would have given the test results to back up your assertion.
Just a thought.
I’ve read many discussions debating “quantum immortality” over the years. They never seem to get anywhere.
Is QI true? Should you expect to be immortal? This seems like one of those “wrong questions” that Eliezer talks about. That is, there’s really no way even in principle to figure out if it’s true. Suppose the MWI is correct and you play Russian roulette and repeatedly find yourself surviving, seemingly way too often for it to be chance. Well, by the MWI you’d predict that somewhere in the multiverse there would exist a successor of yourself who would have exactly that experience. So the fact that you find yourself being that successor does not prove that playing Russian roulette is harmless. Your amplitude (probability) is greatly reduced, and whether you view that as harmful or not may depend on other considerations. Normally you do care very much about probabilities, although perhaps you can make an argument why you should not care in this case. Either way, the fact that you are alive doesn’t by itself answer the question.
How about this, though. Suppose you were uncertain about the MWI versus other interpretations. Would finding yourself alive after many trials of Russian roulette be evidence in favor of the MWI? I don’t think so, although I’m not 100% sure of my reasoning. Try the standard Bayesian approach. The probability of finding myself alive in a conventional collapse interpretation is very low. Now we are tempted to say that the probability of finding myself alive in the MWI is high, in fact it is certain that I will survive in some branches. And if this is correct, then survival does strongly argue in favor of the MWI, by Bayes’ theorem.
But is it right to say that the probability of finding myself alive is certain, in the MWI? We know that in those branches where I survive, my quantum amplitude (probability) is greatly reduced. Normally in the MWI if we are going to use Bayesian reasoning, we have to discount branches by their probability weighting, or else we are going to get the wrong answer. We can’t just treat all branches as equally probable. But if we apply that discounting in this case, the Bayesian argument in favor of the MWI goes away. The probability we need to use for finding ourselves alive in the MWI is just as low as it is in a conventional collapse interpretation. Hence even very low probability survival is not evidence for the MWI. (BTW I think I am reconstructing an argument from Wei Dai many years ago on his everything-exists mailing list.)
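A minimal numeric sketch of this argument, with a made-up 50/50 prior over MWI versus a collapse interpretation (the function name and the n = 10 example are mine):

```python
def posterior_mwi(n, prior=0.5, naive=False):
    """Posterior P(MWI | survived n rounds of 50/50 Russian roulette).

    naive=True treats survival under MWI as certain; naive=False
    discounts surviving branches by their probability weight (0.5 ** n).
    """
    like_collapse = 0.5 ** n
    like_mwi = 1.0 if naive else 0.5 ** n
    return prior * like_mwi / (prior * like_mwi + (1 - prior) * like_collapse)

print(posterior_mwi(10, naive=True))   # ~0.999: survival looks like strong evidence for MWI
print(posterior_mwi(10, naive=False))  # 0.5: survival is no evidence either way
```

The naive version lets ten survivals swing the posterior almost all the way to MWI; the amplitude-discounted version leaves the prior untouched, which is the point of the argument above.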
Hal, I’m afraid I’ve failed to understand your argument, probably because I’m not properly versed in Bayesian reasoning. So maybe I should just shut up (though you encouraged us earlier to engage in topics beyond our understanding). But anyway, this sentence jumps out at me:
“Normally in the MWI if we are going to use Bayesian reasoning, we have to discount branches by their probability weighting, or else we are going to get the wrong answer.”
What I would ask is: is the quantum suicide case sufficiently “normal”? It seems like it’s a profoundly abnormal case. In the normal case, you’re going to be around to observe the results, regardless of what happens.
I think the idea behind using (repeated rounds of) the 50⁄50 experiment to prove QI is that the experiment leverages this “Observer Selection Effect” in a way that other experiments don’t.
But as I say, Bayes is currently above my understanding, so I’m kind of stabbing in the dark here.
Don’t we all the time bring some sense of steering through many worlds into our experience? So are there more and less auspicious choices to be made? What is this normality you speak of?
From Evans-Pritchard:
In Zandeland sometimes an old granary collapses. There is nothing remarkable in this. Every Zande knows that termites eat the supports in [the] course of time and that even the hardest woods decay after years of service. Now a granary is the summerhouse of a Zande homestead and people sit beneath it in the heat of the day and chat or play the African hole-game or work at some craft. Consequently it may happen that there are people sitting beneath the granary when it collapses and they are injured, for it is a heavy structure made of beams and clay and may be stored with eleusine [millet] as well. Now why should these particular people have been sitting under this particular granary at the particular moment when it collapsed? That it should collapse is easily intelligible, but why should it have collapsed at the particular moment when these particular people were sitting beneath it?
Thanks and great admiration for your project.
Allan—My argument is pretty hand-wavey at this point. I would have to try to develop it in more detail to see if it really holds. Maybe if we ever have a subsequent thread to discuss QI, I will try to bring it forward at that time.
One point which has not been mentioned here, I don’t think, is the corollary to QI, what is called Quantum Suicide. This is where you buy a lottery ticket, and set up a machine to monitor the results and instantly and painlessly kill you if you don’t win. Then, if you find yourself alive afterwards, you will have won the lottery. So to believers in QI, this is a way of guaranteeing that you win the lottery.
(H.Finney wrote:) “But then, some philosophers have claimed that brains could perhaps influence quantum events, pointing to the supposed collapse of the wave function being caused by consciousness as precedent. And we all know how deep that rabbit hole goes.”
How deep does it go? Penrose’s (a physicist) quantum brain components (an aspect of neurobiology and philosophy of mind) don’t seem to exist, but I had to dig up ideas like the “cemi field theory” on my own, in past discussions on this topic (which always degenerated to uploading for immortality and cryonics); they certainly weren’t forwarded by free-will naysayer robots.
“(EY wrote:) If you’re thinking about a world that could arise in a lawful way, but whose probability is a quadrillion to one, and something very pleasant or very awful is happening in this world… well, it does probably exist, if it is lawful. But you should try to release one quadrillionth as many neurotransmitters, in your reward centers or your aversive centers, so that you can weigh that world appropriately in your decisions. If you don’t think you can do that… don’t bother thinking about it.”
What if it is a fifty-fifty decision? If I see a pretty girl who is a known head-case, I can try to make the neural connection of her image with my boobies-Marilyn-Manson neuron. Once I start to use abstract concepts (encoded in a real brain) to control chemical squirts, I’m claiming the potential for some limited free will. I doubt there are any world-lines where a computer speaker materializes into my lungs, even though it is physically possible. But if I think I’d like to crush the speaker into my chest, it might happen. In fact, I’d bet world-lines split off so rarely that there isn’t a single world-line where I attack myself with a computer speaker right now. Has anyone read recent papers describing what variables limit decoherence assuming MWI? To my knowledge, photon effects only demonstrate a “few” nearby photons in parallel worlds.
Don’t faster-than-c solutions to general relativity destroy the concept of MWI as a block universe?
Put me down as a long time many-worlder who doesn’t see how it makes average utilitarianism more attractive.
It seems to me that MWI poses challenges for both average utilitarianism and sum utilitarianism. For sum utilitarianism, why bother to bring more potential people into existence in this branch, if those people are living in many other branches already?
But I wonder if Eliezer has considered that MWI plus average utilitarianism seems to imply that we don’t need to worry about certain types of existential risk. If some fraction of the future worlds that we’re responsible for gets wiped out, that wouldn’t lower the average utility, unless for some reason the fraction that gets wiped out would otherwise have had an average utility that’s higher than the average of the surviving branches. Assuming that’s not the case, the conclusion follows that we don’t need to worry about these risks, which seems pretty counterintuitive.
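A toy illustration of this point, with made-up branch weights and utilities (the helper function and numbers are mine, chosen only to make the arithmetic visible):

```python
def avg_utility(branches):
    """Measure-weighted average utility over the branches that survive.
    branches: list of (weight, utility) pairs."""
    total_w = sum(w for w, u in branches)
    return sum(w * u for w, u in branches) / total_w

# Four equal-weight branches, all at utility 10.
uniform = [(0.25, 10), (0.25, 10), (0.25, 10), (0.25, 10)]
print(avg_utility(uniform))      # 10.0
print(avg_utility(uniform[:3]))  # 10.0: wiping out one branch leaves the average unchanged

# But if the wiped-out branch had been better than average:
mixed = [(0.25, 10), (0.25, 10), (0.25, 10), (0.25, 20)]
print(avg_utility(mixed))        # 12.5
print(avg_utility(mixed[:3]))    # 10.0: now the average drops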
Wei, that wouldn’t follow if there are such things as Death events; wiping out a planet would increase the average proportion of people who die. I’ve always found it hard to make the numbers add up on anthropics without Death events; then again, I’m starting to find it hard to make it add up even with death. Also, quantum immortality is not necessarily your friend, the worlds in which you survive may not be pleasant.
Is Death capitalized because it is being used in a technical sense?
Eliezer, suppose the nature of the catastrophe is such that everyone on the planet dies instantaneously and painlessly. Why should such deaths bother you, given that identical people are still living in adjacent branches? If avoiding death is simply a terminal value for you, then I don’t see why encouraging births shouldn’t be a similar terminal value.
I agree that the worlds in which we survive may not be pleasant, but average utilitarianism implies that we should try to minimize such unpleasant worlds that survive, rather than the existential risk per se, which is still strongly counterintuitive.
I don’t know what you are referring to by “hard to make numbers add up on anthropics without Death events”. If you wrote about that somewhere else, I’ve missed it.
A separate practical problem I see with the combination of MWI and consequentialism is that, due to branching, the measure of worlds a person is responsible for is always rapidly and continuously decreasing, so that, for example, I’m now responsible for a much smaller portion of the multiverse than I was just yesterday or even a few seconds ago. In theory this doesn’t matter, because the costs and benefits of every choice I face are reduced by the same factor, so the relative rankings are preserved. But in practice this seems pretty demotivational, since the subjective mental cost of making an effort appears to stay the same, while the objective benefits of such effort decrease rapidly. Eliezer, I’m curious how you’ve dealt with this problem.
I’d really like to see an elaboration on this.
I think that there are deep philosophical implications for many-world theories, including but not limited to quantum many-world theories. If there are many worlds, presumably a large number of them must differ in their most obvious meta-characteristics. Some of these meta-characteristics that I observe are consequence, complexity, and difficulty (that is, across a wide array of phenomena, harmony is possible but not easy. There is no argument that will convince everyone, there is no FTL, there is a great filter...). Thus I can safely presume that inhabitants of the worlds which do not share these meta-characteristics are in some separate anthropic set. Thus, for beings in my anthropic set, I can take these characteristics as moral axioms. I do not argue that they are a source for all moral reasons, except through the difficult mediation of evolution; however, they are a moral bedrock. In other words, they underdetermine my morals, but they do determine them.
Did you ever read “A Fire Upon The Deep”? Obviously, it’s shameless space opera. But it’s a good metaphor for what I think our real situation is. The premise is that there is some kind of “IQ limit” that goes from 0 at the center of the galaxy to infinite outside it. The outside is the domain of the strong AIs, ineffable to human reason, and we are in the grey zone, where intelligence is possible but AI is not. I think that a situation something like this pertains, not over real space, but over the parameter space of multiple worlds. We ARE on the border of God’s Mandelbrot Set, and that there is something special about that. If we ever make it to AGI, for me, that is not a win condition (or a lose condition) but just a boundary condition: I cannot begin to evaluate the conditions of my actions here and now on the world beyond that boundary, so it is beyond my morals. The specialness of our position, and the fact that a world where we attain AGI is not in the same way special, is for me a consequence of the anthropic principle plus many worlds (as I said, quantum and otherwise.)
So many worlds for me is an argument that we should not be in any all-fired hurry to reach AGI, that moral actions within the context of the world-as-we-know-it are more important.
Very nice treatment of a complex subject. Are you a scientist?
If you are ever interested in actually using quantum randomness to base a decision on, whether you are up against a highly accurate predictor, can’t decide between two fun activities for the day, or something else where splitting yourself may be of use, then there is a very helpful quantum random number generator linked here. Simply precommit to one decision in case the ending digit is a 0, and another if the ending digit is a 1, then look at the webpage. Right Here.
Myself, I use this.
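The precommitment pattern described above can be sketched as follows. For illustration only, the random bit here comes from the operating system’s entropy pool via Python’s `secrets` module, which is a stand-in assumption, not a quantum source; a real use would query a quantum RNG service like the one linked above:

```python
import secrets

def coin_decide(option_if_zero, option_if_one):
    """Precommit to two options, then branch on one random bit.

    Stand-in source: secrets.randbits draws from OS entropy, not a
    quantum device; substitute a real QRNG query to actually 'split'.
    """
    bit = secrets.randbits(1)
    return option_if_zero if bit == 0 else option_if_one

# Precommit first, then draw the bit:
choice = coin_decide("go hiking", "stay in and read")
print(choice)
```

The key point is that both options are fixed before the bit is drawn, so (with a genuine quantum source, under MWI) both decisions get carried out in some branch.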
If there were something else instead of quantum mechanics, it would still be what there is and would still add up to normality.