Anthropical Probabilities Are Fully Explained by Difference in Possible Outcomes
This is the third post in my series on Anthropics. The previous one is Conservation of Expected Evidence and Random Sampling in Anthropics. The next one is Anthropical Paradoxes are Paradoxes of Probability Theory.
Introduction
In the previous post I argued that all you need to do anthropics right is to make sure that you follow the Conservation of Expected Evidence. That there is no first-person magic. That what matters is the causal structure of the setting—whether there is random sampling or not and what kind of evidence can in principle be observed.
However, this can be mistakenly interpreted as if there should never be any difference whatsoever between the first- and third-person perspectives. That learning that you exist should give you exactly the same information as another person learning that you exist.
This is not the case. There can be valid differences between people’s perspectives. But they have to be grounded in differences in sampling and expected evidence and have nothing to do with metaphysics.
Beauty’s disagreement with a Visitor
Let’s investigate Incubator Sleeping Beauty with a Visitor (ISBV):
You are an observer in the Incubator Sleeping Beauty experiment. You do not know the result of the coin toss, but you’ve looked into a room, chosen randomly among the two, and seen that there is a Beauty there. What’s the probability that the coin landed Heads? Then you’ve noticed that it’s Room 1. What’s the probability that the coin landed Heads now?
Let’s start with the first question. There are two rooms and two coin sides, which gives four equally likely outcomes, and the chosen room is empty in only one of them (Heads & Room 2). So seeing a Beauty rules out that outcome: P(Heads | Beauty seen) = 1/3.
On the other hand, if it’s definitely Room 1, then it couldn’t possibly have been empty anyway, so no new evidence is observed: P(Heads | Beauty seen, Room 1) = 1/2.
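Spelled out with Bayes’ theorem over the four equally likely coin-and-room combinations:

\[P(\text{Heads}\mid \text{Beauty seen}) = \frac{P(\text{Beauty seen}\mid \text{Heads})\,P(\text{Heads})}{P(\text{Beauty seen})} = \frac{\frac{1}{2}\cdot\frac{1}{2}}{\frac{3}{4}} = \frac{1}{3}\]

\[P(\text{Heads}\mid \text{Beauty seen, Room 1}) = \frac{P(\text{Beauty seen, Room 1}\mid \text{Heads})\,P(\text{Heads})}{P(\text{Beauty seen, Room 1})} = \frac{\frac{1}{2}\cdot\frac{1}{2}}{\frac{1}{2}} = \frac{1}{2}\]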
But as we already know, the situation is quite different for the Beauty herself. Her existence doesn’t give her any new information, as there is no random sampling going on for her: P(Heads | I exist) = 1/2.
But being in Room 1 is twice as likely when the coin is Heads: P(Heads | I am in Room 1) = 2/3.
And seeing a bystander entering her room doesn’t give the Beauty any new evidence regarding the result of the coin toss either, as in both the Heads and Tails outcomes there is exactly a 50% chance that her room is the one to be visited.
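Spelled out with Bayes’ theorem from the Beauty’s possible outcomes (on Heads she is certainly in Room 1, on Tails she is in Room 1 with probability 1/2):

\[P(\text{Heads}\mid \text{I am in Room 1}) = \frac{1\cdot\frac{1}{2}}{1\cdot\frac{1}{2}+\frac{1}{2}\cdot\frac{1}{2}} = \frac{2}{3}\]

\[P(\text{Heads}\mid \text{my room is visited}) = \frac{\frac{1}{2}\cdot\frac{1}{2}}{\frac{1}{2}\cdot\frac{1}{2}+\frac{1}{2}\cdot\frac{1}{2}} = \frac{1}{2}\]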
This leads to a seemingly paradoxical situation. In the same experiment, the Visitor’s credence for Heads is different from the Beauty’s credence.
Visitor: This randomly chosen room isn’t empty. Turns out the probability that the coin is Heads is 1⁄3.
Beauty: Not so from my perspective! I still believe it’s 1⁄2, because neither my existence, nor your visit gave me any relevant new evidence. However, could you check whether it’s Room 1?
Visitor: Let me see… [checks the label on the other side of the door] Oh yes, it is indeed Room 1. And as it would’ve been filled anyway, I agree that the probability for Heads is just 1⁄2.
Beauty: No, it’s not. Now that I know that this is Room 1, I believe the probability for Heads is 2/3, as I’m twice as likely to be in Room 1 when the coin is Heads!
Resolving the disagreement
It may seem that someone necessarily has to be wrong here. And if it’s the Beauty who is correct, then she has to possess some weird first-person psychic powers giving her access to some otherwise unobtainable evidence.
But this is not the case. In fact, both the Beauty and the Visitor are reasoning correctly! Here is a simulation of a repeated experiment written in Python. The implementation of the incubator() function is taken from here.
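Since the snippets below rely on what incubator() returns, here is a minimal sketch of such a helper, reconstructed from how the snippets use its return value; treat it as an assumption rather than the linked implementation.

import random

def incubator():
    # Sketch (assumption): on Heads one Beauty is created in Room 1;
    # on Tails two Beauties are created, one per room. The key 'my_room'
    # tracks the room of the particular Beauty whose perspective we simulate.
    if random.random() >= 0.5:
        coin = 'Heads'
        rooms = {'my_room': 1}
    else:
        coin = 'Tails'
        my_room = 1 if random.random() >= 0.5 else 2
        rooms = {'my_room': my_room, 'other_room': 3 - my_room}
    return rooms, coin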
Visitor:
coin_guess = []
for n in range(100000):
    rooms, coin = incubator()
    visitor_room_select = 1 if random.random() >= 0.5 else 2
    visitor_sees_any_beauty = visitor_room_select in rooms.values()
    if visitor_sees_any_beauty:
        coin_guess.append(coin == 'Heads')
coin_guess.count(True)/len(coin_guess) # 0.33294271180120916
Visitor, Room 1:
coin_guess = []
for n in range(100000):
    rooms, coin = incubator()
    visitor_room_select = 1 if random.random() >= 0.5 else 2
    if visitor_room_select == 1:
        visitor_sees_any_beauty = visitor_room_select in rooms.values()
        if visitor_sees_any_beauty:
            coin_guess.append(coin == 'Heads')
coin_guess.count(True)/len(coin_guess) # 0.5022478869862329
Beauty:
coin_guess = []
for n in range(100000):
    rooms, coin = incubator()
    visitor_room_select = 1 if random.random() >= 0.5 else 2
    visitor_sees_this_beauty = visitor_room_select == rooms['my_room']
    if visitor_sees_this_beauty:
        coin_guess.append(coin == 'Heads')
coin_guess.count(True)/len(coin_guess) # 0.50111
Beauty, Room 1:
coin_guess = []
for n in range(100000):
    rooms, coin = incubator()
    visitor_room_select = 1 if random.random() >= 0.5 else 2
    if visitor_room_select == 1:
        visitor_sees_this_beauty = visitor_room_select == rooms['my_room']
        if visitor_sees_this_beauty:
            coin_guess.append(coin == 'Heads')
coin_guess.count(True)/len(coin_guess) # 0.663950412781533
How can it be?
When the Visitor looks into the room, there is a possibility of seeing that it’s empty. This is not the case for the Beauty herself, as she can’t possibly witness her own absence. The outcome Heads & Room 2 doesn’t exist for her the way it does for the Visitor.
Moreover, on the Tails outcome the Visitor will always see a Beauty in the room. However, each Beauty created on Tails has only a 50% chance of seeing the Visitor. As a result, the Visitor observes the outcome “any Beauty was seen”, while the Beauty herself observes the outcome “this particular Beauty was seen”. Such a difference in observed and possible outcomes naturally leads to different probability estimates.
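In terms of likelihoods, the two observers condition on different events:

\[P(\text{any Beauty seen}\mid\text{Heads}) = \tfrac{1}{2},\quad P(\text{any Beauty seen}\mid\text{Tails}) = 1 \;\Rightarrow\; P(\text{Heads}\mid\text{any Beauty seen}) = \tfrac{1}{3}\]

\[P(\text{this Beauty seen}\mid\text{Heads}) = \tfrac{1}{2},\quad P(\text{this Beauty seen}\mid\text{Tails}) = \tfrac{1}{2} \;\Rightarrow\; P(\text{Heads}\mid\text{this Beauty seen}) = \tfrac{1}{2}\]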
If this still doesn’t feel intuitive, you can remind yourself why it is possible in principle to guess the result of the coin toss better than chance in this setting. Both the Beauty and the Visitor can do it only in some subset of the coin tosses—when they have some extra evidence. And these subsets are different for them.
For the Visitor, it’s when the room randomly selected for the visit contains a Beauty: P(Heads | Beauty seen) = 1/3.
But not when it’s Room 1: P(Heads | Beauty seen, Room 1) = 1/2.
But for the Beauty it’s when she is in Room 1: P(Heads | I am in Room 1) = 2/3.
Once again, this isn’t because the Beauty’s first-person perspective as a participant in the experiment has some mystical properties that the Visitor cannot duplicate. It’s quite easy to put the Visitor in the exact same epistemic situation. All we need to do is modify the setting of the experiment a bit, so that the possible outcomes for the Visitor are the same as for the Beauty herself. Like this:
You are an observer in the Incubator Sleeping Beauty experiment. You do not know the result of the coin toss, but you were brought into a room randomly selected among the ones where there is a Beauty. What’s the probability that the coin landed Heads? What’s the probability that the coin landed Heads if you know that this is Room 1?
Now, due to the experiment design, the Visitor is certain to see a Beauty in the room, so this observation carries no evidence: P(Heads | Beauty seen) = 1/2.
But information about whether they were brought into Room 1 becomes valuable. After all, that is twice as likely to happen on Heads as on Tails: P(Heads | Room 1) = 2/3.
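As with the snippets above, this can be checked by simulation. A minimal sketch, assuming the incubator() helper sketched earlier, where rooms.values() lists the occupied rooms:

coin_guess = []
coin_guess_room_1 = []
for n in range(100000):
    rooms, coin = incubator()
    # The Visitor is brought into a room randomly selected among the occupied ones,
    # so a Beauty is seen with certainty
    visitor_room_select = random.choice(list(rooms.values()))
    coin_guess.append(coin == 'Heads')
    if visitor_room_select == 1:
        coin_guess_room_1.append(coin == 'Heads')
coin_guess.count(True)/len(coin_guess) # expected to be about 0.5
coin_guess_room_1.count(True)/len(coin_guess_room_1) # expected to be about 0.667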
No need to trouble the Beauty
At this point we do not even need a Beauty in the room. We can put a mannequin or any other object there, or just mark a room in any way that the Visitor will recognize. The same disagreement that the Beauty had with the Visitor can be recreated with two Visitors with different possible outcomes: one who was brought to a random room and saw that it was marked, and another who was guaranteed to be brought to one of the marked rooms in the first place.
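A sketch of this two-Visitor version, reusing the incubator() helper from above with the occupied rooms standing in for the marked ones (an assumption about how the marking is simulated):

random_room_visitor_guess = []  # brought to a random room, saw a mark, then learned it's Room 1
marked_room_visitor_guess = []  # brought to a randomly chosen marked room, then learned it's Room 1
for n in range(100000):
    rooms, coin = incubator()
    marked_rooms = set(rooms.values())  # occupied rooms play the role of marked rooms
    # First Visitor: a random room that turns out to be marked and to be Room 1
    random_room = 1 if random.random() >= 0.5 else 2
    if random_room in marked_rooms and random_room == 1:
        random_room_visitor_guess.append(coin == 'Heads')
    # Second Visitor: brought into one of the marked rooms, which turns out to be Room 1
    brought_room = random.choice(list(marked_rooms))
    if brought_room == 1:
        marked_room_visitor_guess.append(coin == 'Heads')
random_room_visitor_guess.count(True)/len(random_room_visitor_guess) # expected to be about 0.5
marked_room_visitor_guess.count(True)/len(marked_room_visitor_guess) # expected to be about 0.667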
There is nothing special about the Beauty’s perspective. No psychic powers, no consciousness magic. All the weirdness of her creation process just contributes to what possible outcomes she is able to have. In fact, she is a Visitor who was supposed to be brought to a marked room and who can’t notice not being brought into the room. Everything probability-relevant is determined by these possible outcomes, so there is no need to talk about anything else.
But if everything is so simple, if anthropics is just basic probability theory with no special case for consciousness, why do we keep encountering anthropic paradoxes? This is a fair question, and my next post will be focused on answering it.
Next post in the series is Anthropical Paradoxes are Paradoxes of Probability Theory
I could suggest a similar experiment which also illustrates the difference between probabilities from different points of view and can be replicated without God and incubators. I toss a coin and, if Heads, say ‘hello’ to a random person from a large group. If Tails, I say it to two people. From my point of view, the chances that the coin is Heads are 0.5. For the people outside, the chance that the coin is Heads, given that they heard ‘hello’, is only 1/3.
It is an observation selection effect (a better term than ‘anthropics’). Outside people can observe Tails twice, and that is why they get a different estimate.
It’s just the simple fact that the conditional probability of an event can be different from the unconditional one.
Before you toss the coin you can reason only based on priors, and therefore your credence is 1/2. But when a person hears “Hello”, they’ve observed the event “I was selected from a large crowd”, which is twice as likely when the coin is Tails; therefore they can update on this information and raise their credence in Tails to 2/3.
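For a person in a group of N people (N here is just notation for the group size), spelled out with Bayes’ theorem:

\[P(\text{Tails}\mid\text{Hello}) = \frac{P(\text{Hello}\mid\text{Tails})\,P(\text{Tails})}{P(\text{Hello}\mid\text{Tails})\,P(\text{Tails})+P(\text{Hello}\mid\text{Heads})\,P(\text{Heads})} = \frac{\frac{2}{N}\cdot\frac{1}{2}}{\frac{2}{N}\cdot\frac{1}{2}+\frac{1}{N}\cdot\frac{1}{2}} = \frac{2}{3}\]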
This is exactly as surprising as the fact that after you tossed the coin and observed that it’s Heads suddenly your credence in Heads is 100%, even though before the coin toss it was merely 50%.
‘Observation selection effect’ is another name for ‘conditional probability’ - the probability of an event X, given that I observe it at all or observe it several times.
By the way, there’s an interesting observation: my probability estimate before a coin toss is an objective probability that describes the property of the coin. However, after the coin toss, it becomes my credence that this specific toss landed Heads. We expect these probabilities to coincide.
If I get partial information about the result of the toss (maybe I heard a sound that is more likely to occur during Heads), I can update my credence about the result of that given toss. The obvious question is: can Sleeping Beauty update her credence before learning that it is Monday?
Don’t say “objective probability”—it’s a road straight to confusion. Probabilities represent your knowledge state. Before the coin is tossed you are indifferent between two states of the coin, and therefore have 1⁄2 credence.
After the coin is tossed: if you’ve observed the outcome, your credence becomes 1; if you received some circumstantial evidence, you update based on it; and if you didn’t observe anything relevant, you keep your initial credence.
If she observes some event that is more likely to happen in the iterations of the experiment where the coin is Tails than in the iterations where the coin is Heads, then she can lawfully update her credence. As the conditions of the experiment rule this out, she doesn’t update.
And of course, she shouldn’t update upon learning that it’s Monday either. After all, the Monday awakening happens with 100% probability on both the Heads and Tails outcomes of the coin toss.
I think that what I call “objective probability” represents a physical property of the coin before the toss, and also that before the toss I can’t get any evidence about the result of the toss. In MWI it would mean a split of timelines. While it is numerically equal to the credence about a concrete toss result, there is a difference, and SB can be used to illustrate it.
Say that in each case where a Beauty and a Visitor meet each other, a wild Bookmaker appears and offers each of them a chance to bet on the outcome of the coin flip. If they have different subjective odds then they will choose to make different bets (depending on the odds offered), and one will be more profitable than the other—so in that sense at least one of them is wrong. Or am I missing something?
Each of them gets the most expected profit when betting at their own odds. My Python code snippets are about basically this scenario.
When a Beauty meets a Visitor in Room 1, she is right about the coin being Heads 2/3 of the time. But a Visitor meeting a Beauty in Room 1 can guess Heads correctly only 1/2 of the time. That’s because in a repeated experiment the Visitor meets Beauties (there are two of them on Tails) more often than any specific Beauty meets the Visitor—they have different possible outcomes, thus different probability estimates and different favourable betting odds.
We can, in principle, make a betting scheme to which only the Visitor’s (or, likewise, only the Beauty’s) probability estimate is relevant. I’ll talk more about it in the next post.