Monty Hall Sleeping Beauty
A friend referred me to another paper on the Sleeping Beauty problem. It comes down on the side of the halfers.
I didn’t have the patience to finish it, because I think SB is a pointless argument about what “belief” means. If, instead of asking Sleeping Beauty for her “subjective probability”, you asked her to place a bet or take some action, everyone could agree on the best answer. That it perplexes people is a sign that they’re talking nonsense, using words without agreeing on their meanings.
But we can make it more obvious what the argument is about by using a trick that also works with the Monty Hall problem: add more doors. By doors I mean days.
The Monty Hall Sleeping Beauty Problem is then:
On Sunday she’s given a drug that sends her to sleep for a thousand years, and a coin is tossed.
If the coin lands heads, Beauty is awakened and interviewed once.
If the coin comes up tails, she is awakened and interviewed 1,000,000 times.
After each interview, she’s given a drug that makes her fall asleep again and forget she was woken.
Each time she’s woken up, she’s asked, “With what probability do you believe that the coin landed tails?”
The halfer position implies that she should still say 1⁄2 in this scenario.
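For reference, here is the arithmetic each camp does in this scaled-up version (a minimal sketch in Python; the thirder calculation simply weights each branch by its number of awakenings):

```python
# Awakenings per branch in the scaled-up problem above.
heads_awakenings = 1
tails_awakenings = 1_000_000

# Both branches have prior probability 1/2, so those factors cancel.
thirder_tails = tails_awakenings / (heads_awakenings + tails_awakenings)
halfer_tails = 0.5  # "no new information, so the prior stands"

print(f"thirder: {thirder_tails:.6f}")  # 0.999999
print(f"halfer:  {halfer_tails}")       # 0.5
```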
Does stating it this way make it clearer what the argument is about?
I like Anthropic SB better.
SB has the following rules explained to her on Sunday.
You will be drugged to sleep now.
Then we will flip a coin.
On a heads, you will be shot in the head until dead.
On a tails, you will be woken up tomorrow and asked “What is the probability that the coin landed tails?”
Who still thinks that SB should assign 1⁄2 to the probability that the coin landed heads?
In my video here I look at a lot of the ramifications of SB decisions: https://www.youtube.com/watch?v=aiGOGkBiWEo
What’s relevant here is the frequentist position. Imagine you run the SB experiment a thousand times in a row. If you tell SB “be correct at as many awakenings as possible”, she will behave as a thirder. If you tell SB “be correct in as many experiments as possible”, she will behave as a halfer. So frequentism no longer converges to a unique subjective probability in the long run.
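A quick simulation makes the split concrete. This is a sketch under stated assumptions, not anyone’s canonical setup: I use 100,000 repetitions instead of a thousand for stabler frequencies, 1,000 tails-awakenings to keep the loop fast, and a policy of giving the same fixed answer at every awakening.

```python
import random

def run(n_experiments=100_000, tails_wakings=1_000, guess="tails"):
    """Score a fixed answer under the two frequentist scoring rules."""
    awakening_hits = awakenings = experiment_hits = 0
    for _ in range(n_experiments):
        tails = random.random() < 0.5            # the coin toss
        wakings = tails_wakings if tails else 1  # awakenings this run
        hit = (guess == "tails") == tails        # same answer at every waking
        awakening_hits += wakings * hit
        awakenings += wakings
        experiment_hits += hit
    return awakening_hits / awakenings, experiment_hits / n_experiments

per_awakening, per_experiment = run()
print(f"correct per awakening:  {per_awakening:.3f}")   # ~0.999
print(f"correct per experiment: {per_experiment:.3f}")  # ~0.500
```

Always answering tails is correct at almost every awakening but in only about half the experiments; the two scoring rules reward different answers.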
No; you are asking her two different questions, so it is correct for frequentism to give different answers to the different questions.
Of course. But the two questions are the same outside of anthropic situations; they are two extensions of the underdefined “how often was I right?” Or, if you prefer, the frequentist answer in anthropic situations depends on the exact question asked, showing that “anthropic probability” is not a well-defined concept.
Usually “Monty Hall”?
Oh, yeah. Too much D&D.
This isn’t a new idea. It’s mentioned in http://www.anthropic-principle.com/preprints/beauty/synthesis.pdf, for instance.
Also, I believe if you read the (detailed) arguments for each side, you’ll find it much harder to reduce them to disagreement over word meaning. Or at least that’s what I remember from when I looked at them.
You want to read http://analysis.oxfordjournals.org/content/60/2/143 and http://analysis.oxfordjournals.org/content/61/3/171
Last two links are paywalled.
https://www.princeton.edu/~adame/papers/sleeping/sleeping.pdf and http://fitelson.org/probability/lewis_sb.pdf
Last link is paywalled.
It’s not clear to me exactly what your position is, so I will assume you’re a thirder. If this is not the case and I have misinterpreted your position, feel free to correct me at will.
I disagree with you: I think “subjective probability” is indeed what one should be asking about, because only in this way can one believe different things depending on the bet being made.
For example, let me attack your Monty-Halled SB:
In an urn there are two white balls and one red. One ball is drawn: if it is white, SB is awakened and interviewed once; if it is red, she is interviewed one million times.
SB must decide beforehand whether she wants to bet on red or white. If she’s correct, she wins a million dollars.
If the thirder answer were always the correct one, SB could calculate beforehand that the red branch gets an awakening-weighted probability of about .999998, so she would always bet on red and lose, on average, to the “halfer” beauty.
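Spelling that calculation out (a sketch under the rules as I stated them: one pre-committed bet, paid at most once per experiment):

```python
# Priors from the urn: two white balls, one red.
p_white, p_red = 2 / 3, 1 / 3

# Awakening-weighted ("thirder") probability that the red ball was drawn.
interviews_white, interviews_red = 1, 1_000_000
thirder_red = (p_red * interviews_red) / (
    p_red * interviews_red + p_white * interviews_white)
print(f"thirder P(red) = {thirder_red:.6f}")  # ~0.999998

# But the bet is placed once, beforehand, and pays once per experiment,
# so expected winnings track the raw ball probabilities.
payout = 1_000_000
print(f"E[bet red]   = ${p_red * payout:,.0f}")    # $333,333
print(f"E[bet white] = ${p_white * payout:,.0f}")  # $666,667
```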
You’ve changed the problem to suit your answer.
Yes, but to be clear, my ‘answer’ is that there’s no universal right answer: when the question asked is about the single anthropic position, the thirders are correct; when it’s about the global structure of the branch they’re in, the halfers’ answer is the correct one.
The argument would have carried through even if I had not broken the symmetry between the branches: in that case both positions would have won on average, so there would have been no ‘obviously’ correct answer. But I think this way is clearer, because in the Monty Hall version one of the branches gets almost all of the probability mass.
Only one of those questions is asked in the problem proper. The other is the product of a poor rephrasing or somebody seeking the question to which their incorrect answer ceases to be incorrect.
I don’t know, and don’t care to track down, the problem as it was originally formulated. If it’s as you say, then I wholeheartedly agree that the correct answer is 1⁄3.
It’s just nice to be able to reason correctly about this kind of anthropic question and to be aware that the answer changes (which is not a given in non-Bayesian takes on probability).
AFAICT, the argument has nothing to do with the problem at all, and everything to do with defending “your” side.
My initial response was “halfer,” the naively obvious answer. Then, knowing that these problems always have a trick, I examined the precise phrasing of the question more closely, and “thirder” is clearly correct. That’s the *point* of the problem, and what makes it interesting: it’s designed to make you come to the wrong conclusion using naive logic, for the pure purpose of showing that the naive logic is, well, naive. We wouldn’t be discussing the problem if it didn’t have that property; if the naive solution weren’t wrong, it would be a completely uninteresting problem.
Spend some time guessing the teacher’s password, people, before you marry your answer and then spend hours trying to invent a novel reason why it must be the correct one. The problem exists *because* it defies your expectations. Instead of trying to justify those expectations, look at what the problem is trying to teach you, because that is what it was designed to do.
TLDR? The Sleeping Beauty problem was designed to impart a lesson about naive logic. Quit fighting the lesson.
Care to elaborate?
You just woke up. You don’t know whether the coin was heads or tails, and you have no further information. You knew it was 50-50 before going to sleep. No new information, no new answer. I don’t see what the “twist” is. In Monty Hall, there is an extra information input: the door the host opens never has the prize behind it.
Or, another perspective: a perfect erasure of someone’s memories and restoration of their body to the pre-event state is exactly the same as if the event in question never occurred. So delete the 1,000,000 interviews from consideration. It’s just one interview after waking. Heads or tails?
You’ve woken up. That, itself, is information.
Your latter paragraph, like so many 50⁄50 justifications, replaces the actual question with a different one.