Doomsday, Sampling Assumptions, and Bayes
Overview/TLDR
We discuss the doomsday argument, and look at the various approaches taken to analyzing observation selection effects, namely the SSA and the SIA. We conclude that the SSA is unsatisfying, and show that the SIA is isomorphic to a version of bayesianism under which:
An entity undergoing a subjective experience should reason as if they are randomly selected from the distribution of all possible entities undergoing that exact same subjective experience.
We apply the principle to various scenarios, and conclude that whilst the SSA and SIA are wildly different in theory, they are equivalent in practice assuming we live in a multiverse.
The Doomsday Argument
Are you gonna drop the bomb or not?
A doomsday argument attempts to predict the chance that humanity will survive a given length of time based purely on the number of people that have lived. A typical informal example might go like this.
If humanity survives thousands of years and spreads to the stars then trillions upon trillions of humans will have lived in total. Therefore if you were to pick a human at random, it would be an incredible coincidence if they happened to be among the first few billion humans to have lived. But you are essentially a human picked at random, and yet by an incredible coincidence you are among the first few billion humans!
If on the other hand humanity will die off in the next 100 years, and in total 100 billion humans will ever live, then it wouldn’t be at all surprising to pick someone who happens to be around the 50 billion person mark.
Hence, it seems more likely that only a small number of humans will ever live than that a large number will live.
Now obviously we have lots of other sources of information we can use to predict how long humanity will survive, but the doomsday argument, if we accept it, will shift these probabilities downwards via a Bayesian update.
The doomsday argument seems crazy at first glance. Predicting the future is common, but that's usually based on extrapolating from information that's available now and determines the future—fast-forwarding the universe in our minds. Here we're not doing that—instead we're saying that if in the future trillions of humans will live, that somehow affects the chances that we'll see what we see in the present. It would definitely be nice to put the doomsday argument on a more formal footing so we can wrestle with it more directly.
Self Sampling Assumption (SSA)
I’m an ordinary man,
Who desires nothing more than an ordinary chance,
The Self Sampling Assumption (from now on: SSA) states that:
All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.
What does that mean? The easiest way to explain is probably by applying it, and we’ll apply it to the doomsday argument:
Let’s assume that I admit only 2 possibilities.
P1: Exactly 1 trillion trillion ($10^{24}$) humans will eventually exist.
P2: Exactly 100 billion ($10^{11}$) humans will eventually exist.
I am evenly split between these two options.
The only fact I know about the world is that I am human number 51,619,483,216.
Then the reasoning goes as follows:
Take my reference class as all humans. I reason as if I was randomly selected from all humans. I apply straightforward bayesian updating.
Under P1 the chance of being human #51,619,483,216 is $10^{-24}$. (Only one human has that number.)
Under P2 the chance of being human #51,619,483,216 is $10^{-11}$. (Only one human has that number.)
Applying Bayes' theorem, the updated value for P2 is
$$P(\text{P2} \mid \text{my number}) = \frac{0.5 \times 10^{-11}}{0.5 \times 10^{-11} + 0.5 \times 10^{-24}} \approx 1 - 10^{-13}$$
Using more realistic priors which assign some probability to all possible total numbers of humans, not just 100 billion and 1 trillion trillion, we still have overwhelming evidence that a huge number of humans will not exist. The most likely possibility is that exactly 51,619,483,216 humans will exist (I am the last human), and the probabilities drop off slowly but steadily from there, such that by the time you get to the thousands of trillions it's effectively 0.
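As a sanity check, here's a minimal numerical sketch of this update. The log-spaced grid of candidate totals and the uniform prior over it are illustrative assumptions of mine, not part of the argument:

```python
# Sketch: the SSA doomsday update over a grid of possible population totals,
# given that I am human number 51,619,483,216.
MY_INDEX = 51_619_483_216

# Illustrative prior: equal weight on each power-of-ten total from 10^11 to 10^24.
totals = [10**k for k in range(11, 25)]
prior = 1 / len(totals)

# SSA likelihood: P(I am human #MY_INDEX | N humans ever live) = 1/N for N >= MY_INDEX.
weights = {n: prior * (1 / n if n >= MY_INDEX else 0) for n in totals}
evidence = sum(weights.values())

for n, w in weights.items():
    print(f"N = 10^{len(str(n)) - 1}: posterior = {w / evidence:.3e}")
```

The posterior piles up on the smallest totals compatible with my existence and shrinks by an order of magnitude with each larger candidate.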
So we’re all going to die. Bummer.
Or perhaps not. Are there any issues with this line of reasoning?
There’s a slight modification to the SSA which accounts for how long each reference class lives, known as the SSSA. However it doesn’t substantially change the argument or the logic, so we can ignore it for now.
It seems at first like the doomsday argument could work just as well on any piece of information, not just what number human I am, meaning that it proves too much and so must be wrong. For example, I have a particular shade of skin. Without loss of generality let's assume it is shade #AXC65BGCCHD. I am the only human that will ever have this exact shade of skin. Me being the one human with exactly this shade of skin is far likelier if only a small number of humans exist than if a large number do. However this argument is incorrect, as there is no guarantee that any human at all will have this particular shade of skin, and so the existence of a human with that exact shade is in and of itself evidence that lots of humans exist, which perfectly counterbalances the opposing evidence.
So we’re going to have to dig deeper to find any problems with the SSA. Let’s repeat it again:
All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.
The overall sentiment seems fine, but there are two parts that seem strange. The first is this idea of a reference class. I am the 51,619,483,216th homo sapiens, but the 75,198,987,322nd homo. Yet the last homo sapiens will almost certainly be the last homo, so which number do I use? How do I decide what the relevant reference class for a problem is, and isn't the entire concept of a reference class entirely arbitrary? Could I choose whatever reference class I wanted (e.g. all humans that will ever exist + all conscious organisms that are already dead) to get whatever result I want?
First we have to understand why we need this reference class concept. As a rule, the SSA says that if you are unsure which world you're in, you should prefer the worlds where things with exactly the same experiences as you make up a greater percentage of all observers. So if all you know is that you are the 51,619,483,216th homo sapiens, then in a world with fewer humans a greater percentage of observers will have that property. But equally, if there are fewer non-human conscious beings, a greater percentage of observers will have that property—I'm more likely to pick the 51,619,483,216th homo sapiens out of a hat if the hat contains just humans than if it contains aliens as well. So the doomsday argument is equally evidence against the existence of aliens (or to be more precise, the very fact I'm a human is evidence against the existence of aliens—although that evidence is perfectly counterbalanced by the fact that very few universes will contain only humans and nothing else).
But is the fact that I’m a human evidence against the existence of rocks? Well rocks are clearly not observers, so presumably not. But what about bacteria? Plants? Insects? Mice? Gorillas? Fetuses? etc. If all I know is I’m a human, am I more likely to be in a world with 1000 humans, 1 human and 10000 apes, or 1 human and a million insects? The SSA will give you different answers depending on where you draw your reference class. However I think it’s clear that consciousness is not a binary switch—there’s no number of neurons you have to have before you magically become conscious. Whatever consciousness is, it clearly is fuzzy and has different levels among different organisms. There’s no obvious point at which you can say—this is an observer, this isn’t.
So what are we meant to do? Weight the distribution of observers by how conscious they are? I can’t see any basis for doing that. Include all things, even if inanimate? That both seems incorrect, and leads to the even more difficult problem of defining what a thing is. Say that probability is inherently subjective? I’m not sure what that would even mean.
Further, this "actually exists" business is suspicious. Firstly, to be clear, "actually exists" means has existed, does exist, or will exist, in this universe or any other—things a billion years ago in other parts of the multiverse still count. Let's consider two scenarios:
S1: God creates 2 universes. In U1 he creates 1 earth, and in U2 he creates a billion earths. Which universe are you in?
S2: God flips a coin. If it's heads he creates 1 universe U1 with 1 earth, and if it's tails he creates 1 universe U2 with a billion earths. What did God flip?
According to the SSA, in S1 you are randomly picked from all existing observers. Since the vast majority of observers are in the universe with a billion earths, you are almost certainly in that universe.
In S2 you are randomly picked from all existing observers. Either all existing observers are in U1, or all existing observers are in U2. We don't get any information at all about which universe we're in, and hence it's still 50/50 as to what was flipped.
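A minimal sketch of the two calculations, assuming for simplicity that each earth hosts exactly one observer (an assumption of mine, not part of the scenario):

```python
# The SSA calculation in the two god scenarios, assuming one observer per earth.
EARTHS_U1 = 1
EARTHS_U2 = 10**9

# S1: both universes actually exist, so SSA samples me from all observers in both.
p_u2_given_s1 = EARTHS_U2 / (EARTHS_U1 + EARTHS_U2)
print(f"S1: P(I am in U2) = {p_u2_given_s1:.9f}")  # ~= 1

# S2: only one universe exists. Whichever it is, I am sampled from the
# observers in that universe alone, so "I exist as an observer" is equally
# likely either way and the 50/50 prior is untouched.
p_tails_given_s2 = 0.5
print(f"S2: P(coin was tails) = {p_tails_given_s2}")
```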
This leads to some strange results. For example, imagine you are the lone conscious being in a universe. You attach yourself to a cloning device, which is rigged to clone you a thousand times if and only if a specific electron is measured and found to have spin up.
The next morning you wake up. What is the chance you are a clone?
Now if the many worlds interpretation of quantum mechanics is correct, there are now two worlds. In one, 1000 clones exist plus the original person; in the other, just 1 person. Since all of these observers actually exist, the chance that you are a clone is $\frac{1000}{1002} \approx 99.8\%$.
If however the Copenhagen interpretation is correct, there is only one world, in which either 0 clones exist or 1000 clones exist. Since we ignore non-existing observers, the chance you are a clone is $0.5 \times \frac{1000}{1001} \approx 50\%$ (taking the spin outcome to be 50/50).
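For concreteness, here's the head-counting behind those two numbers; the 50/50 spin probability and the naive observer-counting across branches are my assumptions:

```python
# Head-counting behind the two clone probabilities.
CLONES = 1000

# Many-worlds: both branches exist. All actual observers:
# (1 original + 1000 clones) in one branch, 1 original in the other.
p_clone_mwi = CLONES / (CLONES + 2)               # 1000/1002 ~= 0.998

# Copenhagen: exactly one world exists. With probability 1/2 it contains
# 1001 observers (1000 of them clones); otherwise it contains only me.
p_clone_copenhagen = 0.5 * CLONES / (CLONES + 1)  # ~= 0.4995

print(f"Many-worlds: P(clone) ~= {p_clone_mwi:.4f}")
print(f"Copenhagen:  P(clone) ~= {p_clone_copenhagen:.4f}")
```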
I personally am deeply suspicious of any claim that the probability an event occurred depends on which interpretation of quantum mechanics you subscribe to.
These two points suggest a refinement of the SSA:
Self Indication Assumption (SIA)
Come with me
And you’ll be
In a world of pure imagination
The Self Indication Assumption (from now on: SIA) states that:
All other things equal, an observer should reason as if they are randomly selected from the set of all possible observers.
This solves two problems. The "actually exists" clause is simply removed, meaning we don't treat the coin-toss case any differently from the case where two universes were created.
This immediately renders the doomsday argument incorrect.
We assume we have a prior distribution over how many people will exist in total—a set of theoretically possible worlds. I've been drawn randomly from the set of all people across all these possible worlds. I am human #51,619,483,216.
In some of these possible worlds fewer than 51,619,483,216 humans exist. I know that I am not in any of those worlds. In all of the others exactly one person exists who is human #51,619,483,216. Hence my number provides no evidence as to which of these remaining possible worlds I come from, as I would be equally likely to exist in any of them.
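Here's a minimal sketch contrasting the two updates on the earlier two-world prior, showing how the SIA's population weighting cancels the doomsday likelihood:

```python
# SSA vs SIA on the two-world doomsday prior from the SSA section.
worlds = {"P1: 10^24 humans": 10**24, "P2: 10^11 humans": 10**11}
prior = {w: 0.5 for w in worlds}

# SSA: I am sampled from the actual observers of the real world,
# so the likelihood of being human #51,619,483,216 is 1/N.
ssa = {w: prior[w] * (1 / n) for w, n in worlds.items()}
ssa_z = sum(ssa.values())

# SIA: I am sampled from all possible observers, which first boosts each
# world in proportion to its population N, exactly cancelling the 1/N
# chance of being this particular human.
sia = {w: prior[w] * n * (1 / n) for w, n in worlds.items()}
sia_z = sum(sia.values())

for w in worlds:
    print(f"{w}: SSA = {ssa[w] / ssa_z:.3e}, SIA = {sia[w] / sia_z:.2f}")
```

The SSA posterior is overwhelmingly on P2, while the SIA posterior stays at the 50/50 prior.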
Note also that we don't mention reference class in this formulation. The reason is that the maths comes out identically no matter what definition of observer we use. We start off by considering that we are drawn from all possible observers. We know, however, that we happen not to have been selected from those whose experiences do not match our own, and so we must be distributed among the remainder, but still perfectly randomly. That remainder is the same no matter your definition of observer (assuming that any reasonable definition of observer will consider all things with identical experiences to you to also be observers). This will become clearer once we reformulate the SIA in terms of bayesianism.
Bayesianism
To see a World in a Grain of Sand.
And a Heaven in a Wild Flower.
Hold Infinity in the palm of your hand.
And Eternity in an hour.
Bayes' theorem is a mathematical formula that states how to update one's belief in a hypothesis as you encounter evidence for or against it. It can be formulated as nothing more than that, but it can also be formulated in a different way, such that the formula is just a derivation of a weltanschauung, a bayesian philosophy of what belief and probability are:
We start off with a distribution of all possible worlds, possibly weighted by some likelihood function indicating that some worlds are inherently more likely than others. These are your priors. There are attempts to rigorously formulate and derive this likelihood function from first principles, but such attempts are irrelevant for our purposes.
A possible world is a full description of every thing that happens in the world. It doesn't matter how this is described—it could be a description of the location of every particle at every moment, or an initial state along with some rules for how to update it. It could follow realistic physics, be a 2d cellular automaton, or have no physics at all and just be a list of states with no rules connecting them. All that matters is that they are fully determined and that we have a likelihood function for each one.
If I know nothing at all about the world I actually live in, I assume that the world is randomly selected from this prior distribution.
As we gather information about the world we rule out some of these possible worlds as not being the world we live in. For example if we look up at the sky and see a yellow sun we rule out all worlds without sight, up and down, a sky, a sun, or where the sun isn’t yellow. When we win the lottery, we remove from the prior all worlds where we didn’t win the lottery.
Under such circumstances Bayes' theorem—$P(A \mid B) = \frac{P(B \mid A) \, P(A)}{P(B)}$—can be reformulated as stating that:
Given two properties about the world A and B
Where you know that the world you live in has property B
The chance of the world having property A—i.e. the weighted percentage of all worlds which have property B which also have property A
Is equal to the weighted percentage of all worlds having property A which also have property B, multiplied by the weighted percentage of all possible worlds which have property A, divided by the weighted percentage of all possible worlds which have property B.
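As a toy illustration of this counting picture (the worlds, properties, and weights below are invented purely for the example):

```python
# Bayes' theorem as weighted counting over possible worlds.
# Each world is the set of properties it has, plus a prior weight.
worlds = [
    ({"A", "B"}, 0.2),
    ({"A"},      0.1),
    ({"B"},      0.3),
    (set(),      0.4),
]

def p(prop):
    """Weighted fraction of all worlds having a property."""
    return sum(w for props, w in worlds if prop in props)

def p_given(prop, given):
    """Weighted fraction of worlds with `given` that also have `prop`."""
    both = sum(w for props, w in worlds if prop in props and given in props)
    return both / p(given)

# Direct counting of P(A|B) agrees with the Bayes'-theorem form:
print(f"P(A|B) by direct counting: {p_given('A', 'B'):.2f}")
print(f"P(B|A) * P(A) / P(B):      {p_given('B', 'A') * p('A') / p('B'):.2f}")
```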
However on it’s own this view of belief and probability isn’t quite complete. We said that:
if we look up at the sky and see a yellow sun we rule out all worlds without sight, up and down, a sky, a sun, or where the sun isn’t yellow
But there’s two problems with this. Firstly, what if I think I saw the sun, but I made a mistake and saw the moon? Or what if I’m actually color blind or dreaming? How do we account for the fact that I can’t really be certain that anything I think is true is really true?
Secondly, I said we can rule out all the worlds where the sun isn't yellow. But given a world with a billion stars, a few of which are yellow, which one is the sun? How do I decide if a world matches the criterion "having a yellow sun"?
If these sound like stupid and pedantic questions to you, then great—you’ve probably already worked out what the real formulation should be! But it’s important to state it clearly.
We can’t really be certain of anything about the actual world. All we can ever know for certain is that we are a blob of consciousness which at the moment is experiencing various things—sights, tastes, sensations, memories, thoughts, feelings, etc. We can’t know if our experiences match up to some reality, or we’re in a simulation, or we’re dreaming or we’re even a boltzmann brain.
Consider our prior of all the possible worlds again. Within a tiny percentage of these possible worlds there are blobs of experience that happen to exactly match ours. Some worlds will even, by some crazy chance, have multiple such experiences.
Given that all we know is we are one of those blobs of experience, we should assume that we are randomly distributed from among those blobs. As our experiences change with every moment, so does the set of blobs that we could be.
An entity undergoing a subjective experience should reason as if they are randomly selected from the distribution of all possible entities undergoing that exact same subjective experience.
This ends up being mathematically equivalent to the SIA, but to me it feels like it's on much firmer foundations. Instead of reasoning about the hypothetical (I could have been a cat, but wasn't—what are the chances of that?) it reasons about what we actually know, asks how many things would know exactly the same stuff as we do, and assumes we are one of them.
Practical differences between the SIA and SSA
Any way the wind blows doesn’t really matter to me
The predictions made by the SIA and SSA are wildly different, and testably so. For example, are most mammals conscious?
It’s estimated that there are 130 billion mammals in the worlds. Lets assume that at least that number have existed at all points in time for the last 10 million years. Let’s assume the average mammal lives 10 years. Then in total some 100 million billion mammals have existed.
On the other hand only 100 billion humans or so have ever existed. If we are drawn randomly from the set of all conscious beings, the chance that we happened to be human is a millionth as large if all mammals are conscious as it is if only humans are conscious. Hence the SSA would seem to predict extremely strongly that most mammals are not conscious.
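Running the arithmetic with the ballpark figures above:

```python
# Rough SSA arithmetic for "are most mammals conscious?",
# using the ballpark figures from the text.
mammals_alive = 130e9   # mammals alive at any one time
years = 10e6            # time span considered
lifespan = 10           # average mammal lifespan in years
total_mammals = mammals_alive * years / lifespan   # ~1.3e17 mammal lives

total_humans = 100e9    # humans who have ever lived

# SSA likelihood of "I am a human" under each hypothesis:
p_human_if_all_conscious = total_humans / total_mammals   # ~1e-6
p_human_if_only_humans = 1.0

print(f"Total mammal lives: {total_mammals:.1e}")
print(f"P(I am human | all mammals conscious) ~= {p_human_if_all_conscious:.1e}")
```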
However, the evidence increasingly shows that mammals do seem to be conscious. Either the million-to-one chance happened, or mammals are not in fact conscious despite the evidence, or there was something wrong with our reasoning.
For similar reasons the SSA would predict that we are less likely to see aliens than the SIA does. According to the SIA, the majority of all experiences that happen to exactly match our own will be in worlds with the most experiences in total, simply because there are more opportunities for that to happen. Under the SSA that's exactly counterbalanced by the fact that we're less likely to be that particular experience in a large world than a small one—the percentage of all beings that are identical to you is the same regardless of how large the world is, and so the SSA provides no evidence at all as to how large the world you live in is.
So there’s a lot of predictions that would allow us to distinguish between the SIA and the SSA. Except...
We most probably live in a multiverse. Depending on the exact nature of the multiverse, almost all possible worlds that contain experiences equal to our own exist. Therefore the SSA will predict that we're randomly selected from all observers across all universes in the multiverse, whilst the SIA will predict that we're randomly distributed across all observers in all possible worlds. It's not immediately obvious what differences there are between the two distributions that would allow us to distinguish between them.