I’d like to ask a question about the Sleeping Beauty problem for someone who thinks that 1⁄2 is an acceptable answer.
Suppose the coin isn’t flipped until after the interview on Monday, and Beauty is asked the probability that the coin has or will land heads. Does this change the problem, even though Beauty is woken up on Monday regardless? It seems to me to obviously be equivalent, but perhaps other people disagree?
If you accept that these problems are equivalent, then you know that P(Heads | Monday) = P(Tails | Monday) = 1⁄2, since if it’s Monday then a fair coin is about to be flipped. From this we can learn that P(Monday) = 2 * P(Heads), by the calculation below.
This is inconsistent with the halfer position, because if P(Heads) = 1⁄2, then P(Monday) = 2 * 1⁄2 = 1.
EDIT: The calculation is that P(Heads) = P(Monday) P(Heads | Monday) + P(Tuesday) P(Heads | Tuesday) = 1⁄2 P(Monday) + 0 P(Tuesday), so P(Monday) = 2 * P(Heads).
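To make the dependence explicit, here is a minimal Python sketch of that calculation; the candidate values for P(Monday) other than 1 are illustrative, not anyone’s claimed answer:

```python
# Sanity check: with P(Heads | Monday) = 1/2 and P(Heads | Tuesday) = 0,
# the law of total probability gives P(Heads) = P(Monday) / 2, i.e.
# P(Monday) = 2 * P(Heads), whatever P(Monday) actually is.
def p_heads(p_monday):
    p_tuesday = 1.0 - p_monday  # treating Monday/Tuesday as exclusive and exhaustive
    return p_monday * 0.5 + p_tuesday * 0.0

for p_monday in (1.0, 2.0 / 3.0, 0.5):
    print(p_monday, p_heads(p_monday))
# Only p_monday = 1.0 yields the halfer's P(Heads) = 1/2;
# any P(Monday) < 1 forces P(Heads) < 1/2.
```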
I think that 1⁄2 is an acceptable answer, and in fact the only correct answer. Basically 1⁄2 corresponds to SSA, and 1⁄3 to SIA; and in my opinion SSA is right, and SIA is wrong.
We can convert the situation to an equivalent Incubator situation to see how SSA applies. We have two cells. We generate a person and put them in the first cell. Then we flip a coin. If the coin lands heads, we generate no one else. If the coin lands tails, we generate a new person and put them in the second cell.
Then we question all of the persons: “Do you think you are in the first cell, or the second?” “Do you think the coin landed heads, or tails?” To make things equivalent to your description we could question the person in the first cell before the coin is flipped, and the person in the second only if they exist, after it is flipped.
Estimates based on SSA:
P(H) = .5
P(T) = .5
P(1st cell) = .75 [there is a 50% chance I am in the first cell because of getting heads; otherwise there is a 50% chance I am the first person instead of the second]
P(2nd cell) = .25 [likewise]
P(H | 1st cell) = 2⁄3 [from above]
P(T | 1st cell) = 1⁄3 [likewise]
P(H | 2nd cell) = 0
P(T | 2nd cell) = 1
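A minimal sampling sketch, assuming SSA is modeled as drawing one observer uniformly from one’s own world, reproduces these estimates:

```python
import random

# SSA modeled as: flip the coin, build my world, then assume I am a
# uniformly random observer *within that world*. (Illustrative assumption.)
def ssa_sample():
    coin = random.choice("HT")
    cells = [1] if coin == "H" else [1, 2]  # tails adds a person in the 2nd cell
    return coin, random.choice(cells)

samples = [ssa_sample() for _ in range(100_000)]
in_first = [coin for coin, cell in samples if cell == 1]
print(len(in_first) / len(samples))                     # P(1st cell) ~ 0.75
print(sum(c == "H" for c in in_first) / len(in_first))  # P(H | 1st cell) ~ 2/3
```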
Your mistake is in the assumption that “P(Heads | Monday) = P(Tails | Monday) = 1⁄2, since if it’s Monday then a fair coin is about to be flipped.” The Doomsday-style conclusion that I fully embrace is that if it is Monday, then it is more likely that the coin will land heads.
I’m curious: is this grounded on anything beyond your intuition in these cases?
SIA is grounded in frequency. In the Incubator situation, the SIA probabilities are:
P(1st cell) = 2⁄3
P(2nd cell) = 1⁄3
P(H | 1st cell) = 1⁄2
P(H | 2nd cell) = 0
(FYI, I find this intuitive, and find SSA in this situation unintuitive.)
These agree with the actual frequencies, in terms of expected number of people in different circumstances, if you repeat this experiment. And frequencies seem very important to me, because if you’re a utilitarian that’s what you care about. If we consider torturing anyone in the first cell vs. torturing anyone in the second cell, the former is twice as bad in expectation (please tell me if you disagree, because I would find this very surprising).
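As a concrete check of the frequency claim, here is a minimal sketch that counts actual people (and torture victims) across repeated runs; the run count is an arbitrary choice:

```python
import random

# Count actual people across repeated incubator runs.
runs = 100_000
people = []
for _ in range(runs):
    coin = random.choice("HT")
    people.append((coin, 1))      # the 1st cell is always occupied
    if coin == "T":
        people.append((coin, 2))  # the 2nd cell is occupied only on tails

first = [c for c, cell in people if cell == 1]
print(len(first) / len(people))                   # fraction in 1st cell ~ 2/3
print(sum(c == "H" for c in first) / len(first))  # heads among 1st cell ~ 1/2
# Torture victims per run: 1st cell ~ 1.0, 2nd cell ~ 0.5 (twice as bad).
print(len(first) / runs, (len(people) - len(first)) / runs)
```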
So your probabilities aren’t grounded in frequency & utility. Is there something else they’re grounded in that you care about? Or do you choose them only because they feel intuitive?
These agree with the actual frequencies, in terms of expected number of people in different circumstances, if you repeat this experiment. And frequencies seem very important to me, because if you’re a utilitarian that’s what you care about.
In a previous thread on Sleeping Beauty, I showed that if there are multiple experiments, SSA will assign intermediate probabilities, closer to the SIA probabilities. And if you run an infinite number, it will converge to the SIA probabilities. So you will partially get this benefit in any case; but apart from this, there is nothing to prevent a person from taking into account the whole situation when they decide whether to make a bet or not.
If we consider torturing anyone in the first cell vs. torturing anyone in the second cell, the former is twice as bad in expectation (please tell me if you disagree, because I would find this very surprising).
I agree with this, since there will always be someone in the first cell, and someone in the second cell only 50% of the time.
So your probabilities aren’t grounded in frequency & utility. Is there something else they’re grounded in that you care about? Or do you choose them only because they feel intuitive?
I care about truth, and I care about honestly reporting my beliefs. SIA requires me to assign a probability of 1 to the hypothesis that there are an infinite number of observers. I am not in fact certain of that, so it would be a falsehood to say that I am.
Likewise, if there is nothing inclining me to believe one of two mutually exclusive alternatives, saying “these seem equally likely to me” is a matter of truth. I would be falsely reporting my beliefs if I said that I believed one more than the other. In the Sleeping Beauty experiment, or in the incubator experiment, nothing leads me to believe that the coin will land one way or the other. So I have to assign a probability of 50% to heads, and a probability of 50% to tails. Nor can I change this when I am questioned, because I have no new evidence. As I stated in my other reply, the fact that I just woke up proves nothing; I knew that was going to happen anyway, even if, e.g. in the incubator case, there is only one person, since I cannot distinguish “I exist” from “someone else exists.”
In contrast, take the incubator case, where a thousand people are generated if the coin lands tails. SIA implies that you are virtually certain a priori that the coin will land tails, or that when you wake up, you have some way to notice that it is you rather than someone else. Both things are false—you have no way of knowing that the coin will land tails or is in any way more likely to land tails, nor do you have a way to distinguish your existence from the existence of someone else.
Adding P(Heads | Monday) and P(Tails | Monday) doesn’t give you P(Monday), it gives you P(1 | Monday).
I didn’t say it did. I said that P(Heads | Monday) = P(Tails | Monday) = 1⁄2, because it’s determined by a fair coin flip that’s yet to happen. This is in contrast to the standard halfer position, where P(Heads | Monday) > 1⁄2, and P(Tails | Monday) < 1⁄2. Everyone agrees that P(Heads | Monday) + P(Tails | Monday) = 1.
Or are you disagreeing with the calculation?
P(Heads) = P(Monday) P(Heads | Monday) + P(Tuesday) P(Heads | Tuesday) is just the law of total probability.
P(Heads | Tuesday) = 0, because if Beauty is awake on Tuesday then the coin must have landed tails.
P(Heads | Monday) = 1⁄2 by the initial reasoning.
Then P(Monday) = 2 * P(Heads) by a teeny amount of algebra.
The probability is 1⁄3 per awakening and 1⁄2 per experiment.
P(Heads | Monday) = 1⁄2
P(Tails | Monday) = 1⁄2
P(Heads | Tuesday) = 0
P(Tails | Tuesday) = 1
Per-experiment:
P(Monday) = 1
P(Tuesday) = 1⁄2
P(Heads) = 1⁄2, P(Tails) = 1⁄2
Per-awakening:
P(Monday) = 2⁄3
P(Tuesday) = 1⁄3
P(Heads) = 1⁄3, P(Tails) = 2⁄3
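Both columns can be read off a single simulation, counting once per run for the per-experiment column and once per interview for the per-awakening column (a sketch; the trial count is arbitrary):

```python
import random

experiments = []
for _ in range(100_000):
    coin = random.choice("HT")
    days = ["Mon"] if coin == "H" else ["Mon", "Tue"]
    experiments.append((coin, days))

# Per-experiment column: each run counts once.
n = len(experiments)
print(sum(coin == "H" for coin, _ in experiments) / n)    # P(Heads)   ~ 1/2
print(sum("Tue" in days for _, days in experiments) / n)  # P(Tuesday) ~ 1/2

# Per-awakening column: each interview counts once.
awakenings = [(coin, day) for coin, days in experiments for day in days]
m = len(awakenings)
print(sum(coin == "H" for coin, _ in awakenings) / m)     # P(Heads)  ~ 1/3
print(sum(day == "Mon" for _, day in awakenings) / m)     # P(Monday) ~ 2/3
```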
I don’t see anything in either of those links claiming that P(Heads | Monday) > 1⁄2. I assume that your reasoning to get that is something like “P(Heads | Tuesday) is less than P(Heads), so it follows that P(Heads | Monday) is greater than P(Heads).” However, if you’re calculating per-experiment, Monday and Tuesday are not mutually exclusive, so this reasoning doesn’t work. (If you’re calculating per-awakening, P(Heads) isn’t 1⁄2 anyway.)
Some additional support for the apparently unreasonable conclusion that if it is Monday, it is more likely that the coin will land heads. Suppose that on each awakening, the experimenter flips a second coin, and if the second coin lands heads, the experimenter tells Beauty what day it is, and does not do so if it is tails.
If Beauty is told that it is Tuesday, this is evidence (conclusive in fact) that the first coin landed tails. So conservation of expected evidence means that if she is told that it is Monday, she should treat this as evidence that the first coin will land heads.
Some additional support for the apparently unreasonable conclusion that if it is Monday, it is more likely that the coin will land heads.
More likely than what?
Using per-awakening probabilities, the probability of heads without this information is 1⁄3.
The new information makes heads more likely than the 1⁄3 that the probability would be without the new information. It doesn’t make it more likely than 1⁄2.
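A minimal per-awakening frequency sketch of the second-coin setup above:

```python
import random

# On each awakening, a second coin decides whether Beauty is told the day.
told_monday = []
for _ in range(100_000):
    coin = random.choice("HT")
    days = ["Mon"] if coin == "H" else ["Mon", "Tue"]
    for day in days:
        if random.choice("HT") == "H" and day == "Mon":  # day revealed, and it's Monday
            told_monday.append(coin)

# Counting per awakening: being told "Monday" raises P(Heads) from ~1/3 to ~1/2.
print(sum(c == "H" for c in told_monday) / len(told_monday))
```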
I misplaced that comment. It was not a response to yours.
More likely than what?
More likely than .5. In fact I am saying the probability of getting heads is 2⁄3 after being told that it is Monday.
Using per-awakening probabilities, the probability of heads without this information is 1⁄3.
This is a frequentist definition of probability. I am using probability as a subjective degree of belief, where being almost certain that something is so means assigning a probability near 1, being almost certain that it is not means assigning a probability near 0, and being completely unsure means .5.
Here is how this works. If I am Sleeping Beauty, on every awakening I am subjectively in the same condition. I am completely unsure whether the coin landed/will land heads or tails. So the probability of heads is .5, and the probability of tails is .5.
What is the subjective probability that it is Monday, and what is the subjective probability it is Tuesday? It is easier to understand if you consider the extreme form. Let’s say that if the coin lands tails, I will be woken up 1,000,000 times. I will be quite surprised if I am told that it is day #500,000, or any other easily definable number. So my degree of belief that it is day #500,000 has to be quite low. On the other hand, if I am told that it is the first day, that will be quite unsurprising. But it will be unsurprising mainly because there is a 50% chance that will be the only awakening anyway. This tells me that before I am told what day it is, my estimate of the probability that it is the first day is a tiny bit more than 50% -- 50% of this is from the possibility that the coin landed heads, and a tiny bit more from the possibility that it landed tails but it is still the first day.
When we transition to the non-extreme form, being Monday is still less surprising than being Tuesday. In fact, before being told anything, I estimate a chance of 75% that it is Monday -- 50% from the coin landing heads, and another 25% from the coin landing tails. And when I am told that it is in fact Monday, then I think there is a chance of 2⁄3, i.e. 50⁄75, that the coin will land heads.
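Read as a sampling rule, picking one awakening uniformly within one’s own run (an interpretation, not necessarily the author’s), these numbers come out of a short sketch:

```python
import random

# One run of the experiment; I then locate myself uniformly among my own
# awakenings (tails gives two equally likely "current days").
samples = []
for _ in range(100_000):
    coin = random.choice("HT")
    day = "Mon" if coin == "H" else random.choice(["Mon", "Tue"])
    samples.append((coin, day))

mondays = [coin for coin, day in samples if day == "Mon"]
print(len(mondays) / len(samples))                    # ~ 0.75
print(sum(c == "H" for c in mondays) / len(mondays))  # ~ 2/3, i.e. 50/75
```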
This tells me that before I am told what day it is, my estimate of the probability that it is the first day is a tiny bit more than 50%… When we transition to the non-extreme form, being Monday is still less surprising than being Tuesday.
In the non-extreme form, the chance of being Monday is 2⁄3 and the chance of being Tuesday is 1⁄3. 2⁄3 is indeed less surprising than 1⁄3, so your reasoning is correct.
before being told anything, I estimate a chance of 75% that it is Monday -- 50% from the coin landing heads, and another 25% from the coin landing tails
Before being told anything, you should estimate a 2⁄3 chance that it’s Monday (not a 75% chance). There are three possibilities: heads/Monday, tails/Monday, and tails/Tuesday, all of which are equally likely. Because tails results in two awakenings, and you are calculating probability per awakening, that boosts the probability of tails, so it would be incorrect to put 50% on heads/Monday and 25% on tails/Monday. Tails/Monday is not half as likely as heads/Monday; it is equally likely. Only in the scenario where you were woken up either on Monday or Tuesday, but not both, would the probability of tails/Monday be 25%.
And when I am told that it is in fact Monday, then I think there is a chance of 2⁄3, i.e. 50⁄75, that the coin will land heads.
When you are told that it is Monday, the chance is not 50⁄75, it’s (1/3) / (2/3) = 50%. Being told that it is Monday does increase the probability that the result is heads; however, it increases it from 1⁄3 → 1⁄2, not from 1⁄2 → 2⁄3.
Before being told anything, you should estimate a 2⁄3 chance that it’s Monday (not a 75% chance). There are three possibilities: heads/Monday, tails/Monday, and tails/Tuesday, all of which are equally likely.
I disagree that these situations are equally likely. We can understand it better by taking the extreme example. I will be much more surprised to hear that the coin was tails and that we are now at day #500,000, than that the coin was heads and that it is the first day. So obviously these two situations do not seem equally likely to me. And in particular, it seems equally likely to me that the coin was or will be heads, and that it was or will be tails. Going back to the non-extreme form, this directly implies that it seems half as likely to me that it is Monday and that the coin will be tails, as it is that it is Monday and that the coin will be heads. This results in my estimate of a 75% chance that it is Monday.
Because tails results in two awakenings, and you are calculating probability per awakening, that boosts the probability of tails, so it would be incorrect to put 50% on heads/Monday and 25% on tails/Monday. Tails/Monday is not half as likely as heads/Monday; it is equally likely.
I am not calculating “probability per awakening”, but calculating in the way indicated above, which does indeed make Tails/Monday half as likely as heads/Monday.
Only in the scenario where you were woken up either on Monday or Tuesday, but not both, would the probability of tails/Monday be 25%.
I am not asking about the probability that the situation as a whole will somewhere or other contain tails/Monday; this has a probability of 50%, just like the corresponding claim about heads/Monday. I am being asked in a concrete situation, “do you think it is Monday?” And I am less sure it is Monday if the coin is going to be tails, because in that situation I will not be able to distinguish my situation from Tuesday. And this is surely the case even when I am woken up both on Monday and Tuesday. It will just happen twice that I am less sure it is Monday.
And based on the above reasoning, being told that it is Monday does indeed lead me to expect that the coin will land heads, with a probability of 2⁄3.
We can understand it better by taking the extreme example. I will be much more surprised to hear that the coin was tails and that we are now at day #500,000, than that the coin was heads and that it is the first day.
You should not be more surprised in that situation. The more days there are, the more the extra tails awakenings push down the probability of heads. With 500,000 awakenings, the probability gets pushed down by a lot. Now heads has a per-awakening probability of 1⁄500001, the same as tails-day-1 and tails-day-500,000.
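Concretely, the per-awakening arithmetic for the extreme case is one line:

```python
# One heads awakening vs. N tails awakenings, each occurring with the same
# long-run frequency per experiment.
N = 500_000
print(1 / (N + 1))  # per-awakening P(Heads) ~ 2e-6, same as any single tails day
```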
You are claiming that if I will be woken up 500,000 times should the coin land tails, I should be virtually certain a priori that the coin will land tails. I am not; I would not be surprised at all if it landed heads. In fact, as I have been saying, the setup does not make me expect tails in any way. So at the start the probability remains 50% heads, 50% tails.
Yes, I am (assuming you mean per-awakening certainty).
I do not. I mean reporting my opinion when someone asks, “Do you think the coin landed heads, or tails?” I will truthfully respond that I have no idea. The fact that I would be woken up multiple times if it landed tails did not make it any harder for the coin to land heads.
I’d recommend distinguishing between the probability that the coin landed heads (which happens exactly once), and the probability that, if you were to plan to peek, you would see heads (which would happen on average 250,000 times).
The problem is that you are counting frequencies, and I am not. It is true that if you run the experiment many times, my estimate will change, from the very moment that I know that the experiment will be run many times.
But if we are going to run the experiment only once, then even if I plan to peek, I would expect with 50% probability to see heads. That does not mean “per awakening” or any other method of counting. It means that if I saw heads, I would say, “Not surprising; that had a 50% chance of happening.” I would not say, “What an incredible coincidence!!!!”
Thank you for walking me through this; I’m having a very hard time seeing the other perspective here.
I understand that P(Monday) is ambiguous. I meant to refer to “the probability that the current day, as Beauty is currently being interviewed, is Monday”. Regardless of Beauty’s perspective, she can ask whether the current day is Monday or Tuesday, and she does know that it is not currently both. And she can ask what the probability is that the coin landed tails given that the current day is Tuesday, etc. Yes? Given that, I’m not seeing what part of my reasoning doesn’t work if you replace each instance of “Monday” with “IsCurrentlyMonday”.
I meant to refer to “the probability that the current day, as Beauty is currently being interviewed, is Monday”.
What you just described is a per-awakening probability. Per-awakening, P(Heads) = 1⁄3, so the proof that P(Heads | Monday) > 1⁄2 actually only proves that P(Heads | Monday) > 1⁄3, which is true since 1⁄2 > 1⁄3.
Sorry, you lost me completely. I didn’t prove that P(Heads | Monday) > 1⁄2 at all.
Could you say which step (1-6) is wrong, if I am Beauty, and I wake up, and I reason as follows?
1) The experiment is unchanged by delaying the coin flip until Monday evening.
2) If the current day is Monday, then the coin is equally likely to land heads or tails, because it is a fair coin that is about to be flipped. Thus P(Heads | CurrentlyMonday) = 1⁄2.
3) By the law of total probability, which is applicable because it cannot currently be both Monday and Tuesday: P(Heads) = P(CurrentlyMonday) P(Heads | CurrentlyMonday) + P(CurrentlyTuesday) P(Heads | CurrentlyTuesday).
4) P(Heads | CurrentlyTuesday) = 0, because if it is Tuesday then the coin must have landed tails.
5) Thus P(CurrentlyMonday) = 2 * P(Heads) by some algebra.
6) It may not currently be Monday, thus P(CurrentlyMonday) != 1, thus P(Heads) < 1⁄2.
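A numeric version of steps 3-6, using the per-awakening value P(CurrentlyMonday) = 2⁄3 as an example input:

```python
# Steps 3-6 with the per-awakening value P(CurrentlyMonday) = 2/3 plugged in.
p_mon = 2.0 / 3.0
p_tue = 1.0 - p_mon
p_heads = p_mon * 0.5 + p_tue * 0.0      # step 3, using steps 2 and 4
print(p_heads)                           # 1/3
print(abs(p_mon - 2 * p_heads) < 1e-12)  # step 5: P(CurrentlyMonday) = 2 * P(Heads)
print(p_heads < 0.5)                     # step 6: P(Heads) < 1/2
```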
Sorry, you lost me completely. I didn’t prove that P(Heads | Monday) > 1⁄2 at all.
You had said:
This is in contrast to the standard halfer position, where P(Heads | Monday) > 1⁄2
Neither of your links to the halfer position shows anyone claiming that. So I assumed you tried to deduce it from the halfer position. The obvious way to deduce it is wrong for the reason I stated.
Could you say which step (1-6) is wrong, if I am Beauty, and I wake up, and I reason as follows?
“CurrentlyMonday” as you have defined it is a per-awakening probability, not a per-experiment probability. So the P(Heads) that you end up computing by those steps is a per-awakening P(Heads). Per-awakening, P(Heads) is 1⁄3, which indeed is less than 1⁄2.
The halfer position assumes that the probability that is meaningful is a per-experiment probability.
(If you want to compute a per-experiment probability, you would have to define CurrentlyMonday as something like “the probability that the experiment contains a bet where, at the moment of the bet, it is currently Monday”, and step 3 won’t work since CurrentlyMonday and CurrentlyTuesday are not exclusive.)
“CurrentlyMonday” as you have defined it is a per-awakening probability
The halfer position assumes that the probability that is meaningful is a per-experiment probability.
To be clear, you’re saying that, from a halfer position, “the probability that, when Beauty wakes up, it is currently Monday” is meaningless?
Neither of your links to the halfer position shows anyone claiming that.
Sorry, I wrote that without thinking much. I’ve seen that position, but it’s definitely not the standard halfer position. (It seems to be entirelyuseless’ position, if I’m not mistaken.)
The per-experiment probabilities you give make perfect sense to me: they’re the probabilities you have before you condition on the fact that you’re Beauty in an interview, and they’re the probabilities from which I derived the “per-awakening” probabilities myself (three indistinguishable scenarios: HM, TM, TT, each with probability 1⁄2; thus they’re all equally likely, though that’s not the most rigorous reasoning).
I’m confused why anyone would want not to condition on the fact that Beauty is awake when the problem states that she’s interviewed each time she wakes up. If instead, on Heads you let Beauty live and on Tails you kill her, then no one would have trouble saying that Beauty should say P(Heads) = 1 in an interview. Why is this different?
Thanks again for the discussion.
To be clear, you’re saying that, from a halfer position, “the probability that, when Beauty wakes up, it is currently Monday” is meaningless?
It’s meaningless in the sense that it doesn’t have a meaning that matches what you’re trying to use it for. Not that it literally has no meaning.
I’m confused why anyone would want not to condition on the fact that Beauty is awake when the problem states that she’s interviewed each time she wakes up.
It depends on what you’re trying to measure.
If you’re trying to measure what percentage of experiments have heads, you need to use a per-experiment probability. It isn’t obviously implausible that someone might want to measure what percentage of experiments have heads.
It’s meaningless in the sense that it doesn’t have a meaning that matches what you’re trying to use it for. Not that it literally has no meaning.
What I’m trying to use it for is to compute P(Heads), from a halfer position, while carrying out my argument.
So in other words, P(per-experiment-heads | it-is-currently-Monday) is meaningless? And a halfer, who interpreted P(heads) to mean P(per-experiment-heads), would say that P(heads | it-is-currently-Monday) is meaningless?
The “per-experiment” part is a description of, among other things, how we are calculating the probability.
In other words, when you say “P(per-experiment event)” the “per-experiment” is really describing the P, not just the event. So if you say “P(per-experiment event|per-awakening event)” that really is meaningless; you’re giving two contradictory descriptions to the same P.
THANK YOU. I now see that there are two sides of the coin.
However, I feel like it’s actually Heads, and not P, that is ambiguous. There is the probability that the coin would land heads. The coin lands exactly once per experiment, and half the time it will land heads. If you count Beauty’s answer to the question “what is the probability that the coin landed heads” once per awakening, you’re sometimes double-counting her answer (on Tails). It’s dishonest to ask her twice about an event that only happened once.
On the other hand, there is the probability that if Beauty were to peek, she would see heads. If she decided to peek, then she would see the coin once or twice. Under SIA, she’s twice as likely to see tails. If you count Beauty’s answer to the question “what is the probability that the coin is currently showing heads” once per experiment, you’re sometimes ignoring her answer (on Tuesdays). It would be dishonest to only count one of her two answers to two distinct questions.
(Being more precise: suppose the coin lands tails, and you ask Beauty “What is the probability that the coin is currently showing heads?” on each day, but only count her answer on Monday. Well, you’ve asked her two distinct questions, because the meaning of “currently” changes between the two days, but only counted one of them. It’s dishonest.)
Thus, this question isn’t up for interpretation. The answer is 1⁄2, because the question (on Wikipedia, at least) asks about the probability that the coin landed heads. There are two interpretations—per experiment and per awakening—but the interpretation should be set by the question. Likewise, setting a bet doesn’t help settle which interpretation to use: either interpretation is perfectly capable of figuring out how to maximize expectation for any bet; it just might consider some bets to be rigged.
This is subtle, though, and maybe I’m still missing things. For one, why is the law of total probability failing? I now know how to use it both to prove that P(Heads) < 1⁄2 and to prove that P(Heads) = 1⁄2, by marginalizing on either CurrentlyMonday/CurrentlyTuesday or on WillWakeUpOnTuesday/WontWakeUpOnTuesday. When you use
P(X) = P(X | A) * P(A) + P(X | B) * P(B)
you need that A and B are mutually exclusive. But this seems to be suggesting that there’s some other subtle requirement as well that somehow depends on what X is.
It could be, as you say, that P is different. But P should only depend on your knowledge and priors. All the priors are fixed here (it’s a fair coin, use SIA), so what are the two sets of knowledge?
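For concreteness, here is a sketch computing both marginalizations from one simulated series; the difference between the two P’s is just which things get counted:

```python
import random

experiments = []
for _ in range(100_000):
    coin = random.choice("HT")
    days = ["Mon"] if coin == "H" else ["Mon", "Tue"]
    experiments.append((coin, days))

# Marginalize per experiment on WillWakeUpOnTuesday / WontWakeUpOnTuesday
# (exclusive per run): P(Heads | Will) = 0, P(Heads | Wont) = 1.
n = len(experiments)
p_will = sum("Tue" in days for _, days in experiments) / n
print(p_will * 0.0 + (1 - p_will) * 1.0)  # ~ 1/2

# Marginalize per awakening on CurrentlyMonday / CurrentlyTuesday
# (exclusive per interview): P(Heads | Mon) = 1/2, P(Heads | Tue) = 0.
awakenings = [(coin, day) for coin, days in experiments for day in days]
p_mon = sum(day == "Mon" for _, day in awakenings) / len(awakenings)
print(p_mon * 0.5 + (1 - p_mon) * 0.0)    # ~ 1/3
```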
In other words, when you say “P(per-experiment event)” the “per-experiment” is really describing the P, not just the event.
My understanding is that P depends only on your knowledge and priors. If so, what is the knowledge that differs between per-experiment and per-awakening? Or am I wrong about that?
That doesn’t help. “Coin landed heads” can still be used to describe either a per-experiment or per-awakening situation:
1) Given many experiments, if you selected one of those experiments at random, in what percentage of those experiments did the coin land heads?
2) Given many awakenings, if you selected one of those awakenings at random, in what percentage of those awakenings did the coin land heads?
Ok, yes, agreed.
My understanding is that P depends only on your knowledge and priors.
A per-experiment P means that P would approach the number you get when you divide the number of successes in a series of experiments by the number of experiments. Likewise for a per-awakening P. You could phrase this as “different knowledge” if you wish, since you know things about experiments that are not true of awakenings and vice versa.
I’m confused why anyone would want not to condition on the fact that Beauty is awake when the problem states that she’s interviewed each time she wakes up.
This is a SIA idea, and it’s wrong. There’s nothing to condition on because there’s no new information, just as there’s no new information when you find that you exist. You can never find yourself in a position where you don’t exist or where you’re not awake (assuming awake here is the same as being conscious.)
Please don’t make statements like this unless you really understand the other person’s position (can you guess how I will respond?). For instance, notice that I haven’t ever said that the halfer position is wrong.
There’s nothing to condition on because there’s no new information
This is just a restatement of SSA. By SIA there is new information, since you’re more likely to be one of a larger set of people.
just as there’s no new information when you find that you exist
Sure there is! Flip a coin and kill Beauty on tails. Now ask her what the coin flip said: she learns from the fact that she’s alive that it landed heads.
I understand that SSA is a consistent position, and I understand that it matches your intuition if not mine. I’m curious how you’d respond to the question I asked above. It’s in the post with “So your probabilities aren’t grounded in frequency & utility.”
For instance, notice that I haven’t ever said that the halfer position is wrong.
And I didn’t say (or even mean to say) that your position is wrong. I said the SIA idea is wrong.
Sure there is! Flip a coin and kill Beauty on tails. Now ask her what the coin flip said: she learns from the fact that she’s alive that it landed heads.
You can learn something from the fact that you are alive, as in cases like this. But you don’t learn anything from it in the cases where the disagreement between SSA and SIA comes up. I’ll say more about this in replying to the other comments, but for the moment, consider this thought experiment:
Suppose that you wake up tomorrow in your friend Tom’s body and with his memories and personality. He wakes up tomorrow in yours in the same way. The following day, you swap back, and so it goes from day to day.
Notice that this situation is empirically indistinguishable from the real world. Either the situation is meaningless, or you don’t even have a way to know it isn’t happening. The world would seem the same to everyone, including to you and him, if it were the case.
So consider another situation: you don’t wake up tomorrow at all. Someone else wakes up in your place with your memories and personality.
Once again, this situation is either meaningless, or no one, including you, has a way to know it didn’t already happen yesterday.
So you can condition on the fact that you woke up this morning, rather than not waking up at all. We can conclude from this, for example, that the earth was not destroyed. But you cannot condition on the fact that you woke up this morning instead of someone else waking up in your place; since for all you know, that is exactly what happened.
The application of this to SSA and SIA should be evident.