You need to start by clearly understanding that the Sleeping Beauty Problem is almost realistic—it is close to being actually doable. We often forget things. We know of circumstances (eg, head injury) that cause us to forget things. It would not be at all surprising if the amnesia drug needed for the scenario to actually be carried out were discovered tomorrow. So the problem is about a real person. Any answer that starts with “Suppose that Sleeping Beauty is a computer program...” or otherwise tries to divert you away from regarding Sleeping Beauty as a real person is at best answering some other question.
Second, the problem asks what probability of Heads Sleeping Beauty should have on being interviewed after waking. This of course means what probability she should rationally have. This question makes no sense if you think of probabilities as some sort of personal preference, like whether you like chocolate ice cream or not. Probabilities exist in the framework of probability theory and decision theory. Probabilities are supposed to be useful for making decisions. Personal beliefs come into probabilities through prior probabilities, but for this problem, the relevant prior beliefs are supposed to be explicitly stated (eg, the coin is fair). Any answer that says “It depends on how you define probabilities”, or “It depends on what reference class you use”, or “Probabilities can’t be assigned in this problem” is just dodging the question. In real life, you can’t refuse to decide what to do on the grounds that the answer depends on your reference class or whatever. Real life consists of taking actions, based on probabilities (usually not explicitly considered, of course). You don’t have the option of not acting (since taking no action is itself an action).
Third, in the standard framework of probability and decision theory, your probabilities for different states of the world do not depend on what decisions (if any) you are going to make. The same probabilities can be used for any decision. That is one of the great strengths of the framework—we can form beliefs about the world, and use them for many decisions, rather than having to separately learn how to act on the basis of evidence for each decision context. (Instincts like pulling our hand back from a hot object are this sort of direct evidence->action connection, but such instincts are very limited.) Any answer that says the probabilities depend on what bets you can make is not using probabilities correctly, unless the setup is such that the fact that a bet is offered is actual evidence for Heads versus Tails.
Of course, in the standard presentation, Sleeping Beauty does not make any decisions (other than to report her probability of Heads). But for the problem to be meaningful, we have to assume that Beauty might make a decision for which her probability of Heads is relevant.
So, now the answer… It’s a simple Bayesian problem. On Sunday, Beauty thinks the probability of Heads is 1⁄2 (ie, 1-to-1 odds), since it’s a fair coin. On being woken, Beauty knows that Beauty experiences an awakening in which she has a slight itch in her right big toe, two flies are crawling towards each other on the wall in front of her, a Beatles song is running through her head, the pillow she slept on is half off the bed, the shadow of the sun shining on the shade over the window is changing as the leaves in the tree outside rustle due to a slight breeze, and so forth. Immediately on wakening, she receives numerous sensory inputs. To update her probability of Heads in Bayesian fashion, she should multiply her prior odds of Heads by the ratio of the probability of her sensory experience given Heads to the probability of her experience given Tails.
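In symbols (my notation, not from the original argument), writing E for the totality of her sensory evidence and H, T for Heads and Tails, this is just Bayes’ rule in odds form:

```latex
\mathrm{Odds}(H \mid E) \;=\; \mathrm{Odds}(H) \times \frac{P(E \mid H)}{P(E \mid T)}
```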
The chance of receiving any particular set of such sensory inputs on any single wakening is some very small probability, call it p. The probability that Beauty has this particular experience at some point when there are two independent wakenings (ie, when the coin landed Tails) is then 2p − p², which is very close to twice that small probability, since p² is negligible when p is tiny. The ratio of the probability of experiencing what she knows she is experiencing given Heads to that probability given Tails is therefore 1⁄2, so she updates her odds in favour of Heads from 1-to-1 to 1-to-2. That is, Heads now has probability 1⁄3.
(Not all of Beauty’s experiences will be independent between awakenings—eg, the colour of the wallpaper may be the same—but this calculation goes through as long as there are many independent aspects, as will be true for any real person.)
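For readers who like to check such arguments numerically, here is a small simulation sketch (my construction, not part of the original argument, with an assumed number N of equally likely “experience tokens” standing in for the vast space of sensory detail). It estimates the probability of Heads given that Beauty’s particular experience occurred:

```python
import random

N = 1000            # assumed number of distinct possible experiences
TRIALS = 1_000_000  # simulated runs of the experiment
target = 0          # the particular experience Beauty in fact has

heads_count = 0
tails_count = 0

for _ in range(TRIALS):
    heads = random.random() < 0.5
    wakenings = 1 if heads else 2
    # Does the target experience occur on at least one wakening?
    if any(random.randrange(N) == target for _ in range(wakenings)):
        if heads:
            heads_count += 1
        else:
            tails_count += 1

print("P(Heads | this experience) =", heads_count / (heads_count + tails_count))
# Prints a value near 1/3: the experience is roughly twice as likely to
# occur somewhere among two independent wakenings as among one.
```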
The 1⁄3 answer works. Other answers, such as 1⁄2, do not work. One can see this by looking at how probabilities should change and at how decisions (eg, bets) should be made.
For example, suppose that after wakening, Beauty says that her probability of Heads is 1⁄2. It also happens that, in an inexcusable breach of experimental protocol, the experimenter interviewing her drops her phone in front of Beauty, and the phone display reveals that it is Monday. How should Beauty update her probability of Heads? If the coin landed Heads, it is certain to be Monday. But if the coin landed Tails, there was only a probability 1⁄2 of it being Monday. So Beauty should multiply her odds of Heads by 2, giving a 2⁄3 probability of Heads.
But this is clearly wrong. Knowing that it is Monday eliminates any relevance of the whole wakening/forgetting scheme. The probability of Heads is just 1⁄2, since it’s a fair coin. Note that if Beauty had instead thought the probability of Heads was 1⁄3 before seeing the phone, she would correctly update to a probability of 1⁄2.
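The two updates can be checked with a few lines of arithmetic (a sketch under the assumptions above; the likelihood ratio of 2 comes from P(Monday given Heads) = 1 versus P(Monday given Tails) = 1⁄2):

```python
def update_odds(prior_odds_heads, likelihood_ratio):
    # Posterior odds = prior odds x P(evidence | Heads) / P(evidence | Tails)
    return prior_odds_heads * likelihood_ratio

# Seeing that it is Monday multiplies the odds of Heads by 2.
for label, prior_odds in [("Thirder (odds 1-to-2):", 1 / 2),
                          ("Halfer  (odds 1-to-1):", 1.0)]:
    post = update_odds(prior_odds, 2.0)
    print(label, "P(Heads) =", post / (1 + post))
# Thirder: 1/2 x 2 = 1 -> P(Heads) = 1/2, as it should be for a fair coin.
# Halfer:  1   x 2 = 2 -> P(Heads) = 2/3, which is clearly wrong.
```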
Some Halfers, when confronted with this argument, maintain that Beauty should not update her probability of Heads when seeing the phone, leaving it at 1⁄2. But as the phone was dropping, before she saw the display, Beauty would certainly not think that it was guaranteed to show that it is Monday (Tuesday would seem possible). So not updating is unreasonable.
We also see that 1⁄2 does not work in betting scenarios. I’ll just mention the simplest of these. Suppose that when Beauty is woken, she is offered a bet in which she will win $12 if the coin landed Heads, and lose $10 if the coin landed Tails. She knows that she will always be offered such a bet after being woken, so the offer does not provide any evidence for Heads versus Tails. If she is woken twice, she is given two opportunities to bet, and could take either, both, or neither. Should she take the offered bet?
If Beauty thinks that the probability of Heads is 1⁄2, she will take such bets, since she thinks that the expected payoff of such a bet is (1/2)*12-(1/2)*10=1. But she shouldn’t take these bets, since following the strategy of taking these bets has an expected payoff of (1/2)*12 - (1/2)*2*10 = −4. In contrast, if Beauty thinks the probability of Heads is 1⁄3, she will think the expected payoff from a bet is (1/3)*12-(2/3)*10=-2.666… and not take it.
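The three expected values just quoted are easy to verify (a sketch of the stated arithmetic, nothing more):

```python
# Bet terms: win $12 on Heads, lose $10 on Tails; offered at every wakening.

# Per-bet value as a Halfer computes it (so she takes the bet):
print("Halfer per-bet EV:  ", 0.5 * 12 - 0.5 * 10)        # +1

# True value, per run of the experiment, of "always take the bet":
# Heads -> one winning bet; Tails -> two losing bets.
print("Strategy EV per run:", 0.5 * 12 - 0.5 * 2 * 10)    # -4

# Per-bet value as a Thirder computes it (so she declines):
print("Thirder per-bet EV: ", (1/3) * 12 - (2/3) * 10)    # about -2.67
```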
Note that Beauty is a real person. She is not a computer program that is guaranteed to make the same decision in all situations where the “relevant” information is the same. It is possible that if the coin lands Tails, and Beauty is woken twice, she will take the bet on one awakening, and refuse the bet on the other awakening. Her decision when woken is for that awakening alone. She makes the right decisions if she correctly applies decision theory based on the probability of Heads being 1⁄3. She makes the wrong decision if she correctly applies decision theory with the wrong probability of 1⁄2 for Heads.
She can also make the right decision by incorrectly applying decision theory with an incorrect probability for Heads, but that isn’t a good argument for that incorrect probability.