Breaking the SIA with an exponentially-Sleeping Beauty
In this post I’d like to construct an extension of the Sleeping Beauty problem which is unsolvable from a thirder/SIA perspective. The core of the construction is having the number of wakings Beauty will experience follow a distribution with an infinite mean. This leaves Beauty unable to assign normalized probability values when reasoning under SIA. Given this forum’s fascination with anthropic reasoning and this problem, this may be of interest.
The original Sleeping Beauty problem
The following is a summary of the thirder/SIA approach to the original Sleeping Beauty problem. From what I can tell, this forum is very well versed here, so I won’t go into too much background detail.
In the original problem, we have the following two steps for the experimenter:
Flip a coin. If heads, record the value $k = 1$. If tails, record the value $k = 2$.
Put Beauty to sleep [on Sunday]. She will be woken once on each of the following $k$ days, with memory erased between wakings. (I.e. once if heads, twice if tails.)
Let $W_i$ be the proposition “I am awake and today is the $i$th day since the experiment began”. (I.e. $W_1$ is “I am awake and it is Monday”; $W_2$ is “I am awake and it is Tuesday”.) Also let $W$ be the proposition “I am awake”, in the sense that $W = W_1 \lor W_2$.
The problem is then to give Beauty a value to assign to the conditional probability of heads given that she has been awoken, i.e. to $P(\text{heads} \mid W)$.
One version of the thirder/SIA calculation for this value then proceeds as follows:

$$\frac{P(\text{heads} \mid W)}{P(\text{tails} \mid W)} \overset{(1)}{=} \frac{P(W \mid \text{heads})\, P(\text{heads})}{P(W \mid \text{tails})\, P(\text{tails})} \overset{(2)}{=} \frac{\left[ P(W_1 \mid \text{heads}) + P(W_2 \mid \text{heads}) \right] P(\text{heads})}{\left[ P(W_1 \mid \text{tails}) + P(W_2 \mid \text{tails}) \right] P(\text{tails})} \overset{(3)}{=} \frac{P(W_1 \mid \text{heads})\, P(\text{heads})}{2\, P(W_1 \mid \text{tails})\, P(\text{tails})} \overset{(4)}{=} \frac{P(\text{heads} \mid W_1)}{2\, P(\text{tails} \mid W_1)} \overset{(5)}{=} \frac{1}{2},$$

where the steps are respectively due to:
Bayes’ law.
That $W = W_1 \lor W_2$ and that $W_1$, $W_2$ are disjoint propositions.
For the numerator, that $W_2$ is impossible given heads, i.e. $P(W_2 \mid \text{heads}) = 0$ (we cannot wake up on Tuesday if the coin came up heads). For the denominator, this is the SIA part of the assumption: given that the coin came up tails, and that we cannot distinguish a waking on Monday from one on Tuesday, we must put equal weight on both possibilities, i.e. $P(W_1 \mid \text{tails}) = P(W_2 \mid \text{tails})$.
Bayes’ law.
Following SIA/thirder reasoning, we put equal weight on heads and tails given that we are waking on Monday, i.e. $P(\text{heads} \mid W_1) = P(\text{tails} \mid W_1)$. I leave the subtler point of justifying this assignment to the many expositions of thirder/SIA reasoning.
Combining this determined ratio with the fact that $P(\text{heads} \mid W) + P(\text{tails} \mid W) = 1$, we must have $P(\text{heads} \mid W) = 1/3$. This of course is what gives proponents of the above calculation the name “thirders”.
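To make the two counting conventions concrete, here is a minimal Monte Carlo sketch (Python; the function and variable names are my own, not from the post). It estimates the frequency of heads per experiment and per awakening; thirder/SIA reasoning corresponds to the per-awakening count:

```python
import random

def sleeping_beauty(n_trials=100_000, seed=0):
    """Estimate P(heads) per experiment and per awakening."""
    rng = random.Random(seed)
    heads_trials = 0       # experiments where the coin came up heads
    heads_awakenings = 0   # awakenings occurring in heads-experiments
    total_awakenings = 0
    for _ in range(n_trials):
        heads = rng.random() < 0.5
        wakings = 1 if heads else 2  # woken once if heads, twice if tails
        heads_trials += heads
        heads_awakenings += wakings if heads else 0
        total_awakenings += wakings
    print("per-experiment frequency of heads:", heads_trials / n_trials)             # ~ 1/2
    print("per-awakening frequency of heads:", heads_awakenings / total_awakenings)  # ~ 1/3

sleeping_beauty()
```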
A new construction: Exponentially-Sleeping Beauty problem
Now suppose we extend the experiment as follows. The experimenter will:
Sample an integer $k \ge 0$ according to the distribution $P(k) = 2^{-(k+1)}$. Perhaps the experimenter implements this by flipping a coin until the first time it comes up tails, and then recording the number of heads.
Wake up Beauty $2^k$ times.
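The name “exponentially-sleeping” is apt: the number of wakings $N = 2^k$ has infinite expectation, since $E[N] = \sum_{k \ge 0} 2^{-(k+1)} \cdot 2^k = \sum_{k \ge 0} \tfrac{1}{2} = \infty$. A quick simulation sketch of the experimenter’s procedure (Python; helper names are my own) makes this visible, as the sample mean of the number of wakings never settles:

```python
import random

def sample_k(rng):
    """Flip a fair coin until the first tails; return the number of heads.
    This realizes P(k) = 2^-(k+1) for k = 0, 1, 2, ..."""
    k = 0
    while rng.random() < 0.5:  # heads: keep flipping
        k += 1
    return k

rng = random.Random(0)
total_wakings = 0
for n in range(1, 10**6 + 1):
    total_wakings += 2 ** sample_k(rng)  # Beauty is woken 2^k times
    if n % 200_000 == 0:
        # E[2^k] diverges, so this running mean keeps drifting upward
        print(f"after {n:>7} experiments: mean wakings = {total_wakings / n:.1f}")
```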
Once again, let $W$ be the proposition “I am awake” and $W_i$ be the proposition “I am awake and today is the $i$th day/waking-time since the start of the experiment”. What value should a waking Beauty now assign to, e.g., the probability $P(k = 0 \mid W)$?
We will see that in this new problem, assigning such a value will be problematic. Following the thirder/SIA approach to the original problem, we calculate, for any pair $m, n \ge 0$, the ratio

$$\frac{P(k = m \mid W)}{P(k = n \mid W)} \overset{(1)}{=} \frac{P(W \mid k = m)\, P(k = m)}{P(W \mid k = n)\, P(k = n)} \overset{(2)}{=} \frac{\sum_{i=1}^{2^m} P(W_i \mid k = m)\, P(k = m)}{\sum_{i=1}^{2^n} P(W_i \mid k = n)\, P(k = n)} \overset{(3)}{=} \frac{2^m\, P(W_1 \mid k = m)\, P(k = m)}{2^n\, P(W_1 \mid k = n)\, P(k = n)} \overset{(4)}{=} \frac{2^m\, P(k = m \mid W_1)}{2^n\, P(k = n \mid W_1)} \overset{(5)}{=} \frac{2^m \cdot 2^{-(m+1)}}{2^n \cdot 2^{-(n+1)}} = 1,$$

where the steps are resp. due to:
Bayes’ law/defn. of conditional probability.
We have $W = \bigvee_{i \ge 1} W_i$ and $P(W_i \mid k) = 0$ for $i > 2^k$, so that $P(W \mid k) = P\left(\bigvee_{i=1}^{2^k} W_i \mid k\right)$. These are mutually exclusive/disjoint propositions, so we can transform this into a sum.
Following thirder/SIA reasoning, given that $k = j$ for some $j$, we must assign equal probability to each of the $2^j$ possible days on which we can awake, i.e. $P(W_i \mid k = j) = P(W_1 \mid k = j)$ for all $i \le 2^j$.
Bayes’ law.
Following thirder/SIA reasoning, given only that it is “Monday” (i.e. the first waking), awoken Beauty’s conditional probability on the outcome of the experimenter’s coin tosses is the same as it was before she went to sleep: $P(k = j \mid W_1) = P(k = j) = 2^{-(j+1)}$.
Hence we have shown that, following thirder/SIA reasoning, exponentially-sleeping Beauty must assign the probability value ratio $P(k = m \mid W) / P(k = n \mid W) = 1$ for every pair $(m, n)$, hence $P(k = m \mid W) = P(k = n \mid W)$ for all $m, n$.
This is a problem. We have an infinite collection of mutually-exclusive propositions whose probabilities we have proven, under SIA reasoning, to all be of equal value. It is impossible to satisfy this while having our probabilities sum to unity, as in $\sum_{m=0}^{\infty} P(k = m \mid W) = 1$.
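Another way to see the same failure (my own restatement of the calculation above): under SIA, each outcome $k$ receives unnormalized weight proportional to its prior times its number of wakings, and these weights are not summable:

$$w(k) \;\propto\; P(k) \cdot 2^k \;=\; 2^{-(k+1)} \cdot 2^k \;=\; \frac{1}{2}, \qquad \sum_{k=0}^{\infty} w(k) \;=\; \sum_{k=0}^{\infty} \frac{1}{2} \;=\; \infty,$$

so there is no normalizing constant that turns the $w(k)$ into a probability distribution over $k$.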
What does this mean? How should Beauty assign probabilities here? Is this a problem for the SIA?
Notes
This construction requires a universe which is infinite in either time or space, in order to have the volume to support a potentially unlimited number of Beauty awakenings. Note, however, that this assumption can be somewhat weakened: so long as the expected value of Beauty’s probability distribution over her number of universe-supported wakings is infinite, the calculation still goes through to the point where we cannot assign probabilities under SIA reasoning.
Alternate constructions with some interesting properties can be realized by having Beauty wake $f(k)$ times for other choices of the growth rate $f$.
I want to express this problem in practical terms. Please tell me if I am misunderstanding anything.
The original Sleeping Beauty problem can be carried out like this: wake up Beauty once, then toss a coin. If it comes up Tails, wake her up again after a memory wipe.
The exponential version can be carried out just like the original, except after waking her up the second time, toss another coin. If Tails, wake her up 2 more times, then toss another coin. If Tails, wake her up 4 more times. Then 8, 16, etc. If we get Heads in any iteration, the experiment stops with no additional awakenings or tosses. (All the coin tosses could be carried out at the very beginning of the experiment too, just like in the original problem.)
When woken up in the experiment, ask Beauty: what is the probability that there are/will be, say, 3 Tails in total? Halfers would say it’s simply (1/2)^(3+1), because there is no new information. SIA supporters could not give any answer, since they would consider all possible numbers of Tails equiprobable up to infinity.
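For what it’s worth, a rough simulation of this sequential protocol (my own sketch; the names are made up) bears this out: the per-experiment frequency of “exactly 3 Tails” settles near (1/2)^(3+1) = 0.0625, while the per-awakening frequency, which is what an SIA reasoner would need, is dominated by whichever run happened to produce the most Tails and does not stabilize:

```python
import random

def run_experiment(rng):
    """Wakings double after every Tails; the first Heads ends the experiment.
    k Tails in total means 1 + 1 + 2 + ... + 2^(k-1) = 2^k awakenings."""
    tails = 0
    while rng.random() < 0.5:  # Tails
        tails += 1
    return tails, 2 ** tails

rng = random.Random(1)
n_trials = 10**5
hits = awakenings_total = awakenings_3_tails = 0
for _ in range(n_trials):
    tails, awakenings = run_experiment(rng)
    hits += (tails == 3)
    awakenings_total += awakenings
    awakenings_3_tails += awakenings if tails == 3 else 0
print("per-experiment freq of 3 Tails:", hits / n_trials)                       # ~ 1/16
print("per-awakening freq of 3 Tails:", awakenings_3_tails / awakenings_total)  # unstable across seeds
```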
I think it is a valid problem for SIA, but I doubt this will change the minds of SIA supporters.
My position on anthropics is treating the first-person perspective (indexicals like I or now) as inherently understood primitive concepts, not assuming them to be random samples from some reference class. This means rejecting self-locating probabilities like “now is the 5th day”, which are used in calculating the probability of k in SIA reasoning.
How is this different from “if I show you a calendar, it will have a 5 on it”, or is that also rejected?
Short answer, also rejected.
Long answer: there is a difference between the current moment defined by the first-person perspective and an objectively defined moment. This is the same as the difference between the first person and the physical person. The whole “Is ‘I exist’ meaningful evidence?” debate is caused by mixing the two meanings and switching between them mid-reasoning.
I suppose your question is asking: “If I check the calendar now (at the current moment), what is the probability that it will show 5?” Then there is no way to assign a probability to it.
If you define the moment some other way, e.g. “checking the calendar on a randomly selected awakening in the experiment”, then of course there is a probability.
Regarding the first-person perspective as a random sample is the assumption I argue against. But it is something both SSA and SIA endorse, and it is what allows them to assign values to self-locating probabilities. For the above two questions, SSA and SIA would consider them the same question; they would, however, argue over what the correct sampling process is.
Yup! You aren’t allowed to put a uniform distribution over a countably infinite set. That doesn’t work outside of anthropics either.
I think a pretty reasonable thing to do is to start penalizing Sleeping Beauties based on how complex the universe with her in it has to be. This is not the same as saying “well, 1 is only 1 bit and 100 is like 7 bits, so Beauty #100 is exponentially less probable”, because we can compress the number of the Sleeping Beauty into the function that takes that number and extrapolates what’s going to happen in the universe. But for really, really, really big-number Sleeping Beauties, eventually the iron law that there are only a limited number of simple numbers catches up with you.
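A toy way to see why a complexity penalty restores normalizability (my own illustration, using a crude stand-in for “complexity of the universe”, namely a prefix-free code length for the integer; this is not the commenter’s exact proposal): weights of the form $2^{-\ell(k)}$ for a prefix-free code satisfy the Kraft inequality, so their total mass is bounded:

```python
# Toy illustration: weight integer k by 2^-(Elias-gamma code length).
# By the Kraft inequality, prefix-free code weights sum to at most 1,
# unlike the uniform (constant) weights that broke SIA above.
def gamma_length(k):
    """Length of the Elias-gamma code for k >= 1: 2*floor(log2 k) + 1 bits."""
    return 2 * (k.bit_length() - 1) + 1

partial = sum(2.0 ** -gamma_length(k) for k in range(1, 10**6))
print(partial)  # stays below 1 no matter how far we extend the sum
```

The bound is what the “iron law” amounts to: there are at most $2^b$ integers describable in $b$ bits, so the simple (high-weight) integers run out, and the tail of any complexity-weighted distribution must shrink.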
If we modify SIA by weighting on the complexity of the universe, what would the conclusion be for the original Sleeping Beauty problem? Isn’t the Tails world marginally more complex than the Heads world?
Maybe, but this is really hard to evaluate, because what we’re comparing the complexity of isn’t just the numbers “1” vs “2”; it’s Sleeping Beauty’s entire predictive model of what’s going to happen in the world after waking up in room 1 vs. 2.
It looks like you’ve rediscovered “SIA fears (expected) infinity”.
You are totally right.
Looks like a variant of the St. Petersburg paradox.