Sorry, you lost me completely. I didn’t prove that P(Heads | Monday) > 1⁄2 at all.
You had said:
This is in contrast to the standard halfer position, where P(Heads | Monday) > 1⁄2
Neither of your links to the halfer position shows anyone claiming that. So I assumed you tried to deduce it from the halfer position. The obvious way to deduce it is wrong for the reason I stated.
Could you say which step (1-6) is wrong, if I am Beauty, and I wake up, and I reason as follows?
“CurrentlyMonday” as you have defined it is a per-awakening probability, not a per-experiment probability. So the P(Heads) that you end up computing by those steps is a per-awakening P(Heads). Per-awakening, P(Heads) is 1⁄3, which indeed is less than 1⁄2.
The halfer position assumes that the probability that is meaningful is a per-experiment probability.
(If you want to compute a per-experiment probability, you would have to define CurrentlyMonday as something like “the probability that the experiment contains a bet where, at the moment of the bet, it is currently Monday”, and step 3 won’t work since CurrentlyMonday and CurrentlyTuesday are not exclusive.)
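Concretely, writing ContainsMonday for “the experiment contains a Monday awakening” (and likewise ContainsTuesday, my labels for the per-experiment versions of those events): every experiment has a Monday awakening, and the Tails experiments also have a Tuesday awakening, so per-experiment

P(ContainsMonday) = 1

P(ContainsTuesday) = P(Tails) = 1⁄2

Both events hold together on every Tails run, and their probabilities sum to 3⁄2 > 1, so they can’t serve as the partition that step 3 requires.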
“CurrentlyMonday” as you have defined it is a per-awakening probability
The halfer position assumes that the probability that is meaningful is a per-experiment probability.
To be clear, you’re saying that, from a halfer position, “the probability that, when Beauty wakes up, it is currently Monday” is meaningless?
Neither of your links to the halfer position shows anyone claiming that.
Sorry, I wrote that without thinking much. I’ve seen that position, but it’s definitely not the standard halfer position. (It seems to be entirelyuseless’ position, if I’m not mistaken.)
The per-experiment probabilities you give make perfect sense to me: they’re the probabilities you have before you condition on the fact that you’re Beauty in an interview, and they’re the probabilities from which I derived the “per-awakening” probabilities myself (three indistinguishable scenarios: HM, TM, TT, each with probability 1⁄2; thus they’re all equally likely, though that’s not the most rigorous reasoning).
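(Spelling that derivation out in frequency terms: over 2N experiments, about N land heads and N land tails, producing N HM awakenings, N TM awakenings, and N TT awakenings, for 3N awakenings in total; so per-awakening P(HM) = P(TM) = P(TT) = N / 3N = 1⁄3.)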
I’m confused why anyone would want not to condition on the fact that Beauty is awake when the problem states that she’s interviewed each time she wakes up. If instead, on Heads you let Beauty live and on Tails you kill her, then no one would have trouble saying that Beauty should say P(Heads) = 1 in an interview. Why is this different?

Thanks again for the discussion.
To be clear, you’re saying that, from a halfer position, “the probability that, when Beauty wakes up, it is currently Monday” is meaningless?
It’s meaningless in the sense that it doesn’t have a meaning that matches what you’re trying to use it for. Not that it literally has no meaning.
I’m confused why anyone would want not to condition on the fact that Beauty is awake when the problem states that she’s interviewed each time she wakes up.
It depends on what you’re trying to measure.
If you’re trying to measure what percentage of experiments have heads, you need to use a per-experiment probability. It isn’t obviously implausible that someone might want to measure what percentage of experiments have heads.
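A minimal simulation sketch of the two counts, assuming a fair coin and the usual schedule (one awakening on Heads, two on Tails); the variable names are mine:

```python
import random

N = 100_000  # number of simulated experiments (an arbitrary choice)

heads_experiments = 0   # experiments in which the coin landed heads
heads_awakenings = 0    # awakenings at which the coin had landed heads
total_awakenings = 0

for _ in range(N):
    heads = random.random() < 0.5       # fair coin
    n_awakenings = 1 if heads else 2    # Monday only vs. Monday and Tuesday
    total_awakenings += n_awakenings
    if heads:
        heads_experiments += 1
        heads_awakenings += 1           # the single Monday awakening

print("per-experiment:", heads_experiments / N)               # ~0.5
print("per-awakening:", heads_awakenings / total_awakenings)  # ~0.33
```

The same simulated runs give both answers; which of the two “P(Heads)” means is exactly what’s at issue.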
It’s meaningless in the sense that it doesn’t have a meaning that matches what you’re trying to use it for. Not that it literally has no meaning.
What I’m trying to use it for is to compute P(Heads), from a halfer position, while carrying out my argument.
So in other words, P(per-experiment-heads | it-is-currently-Monday) is meaningless? And a halfer, who interpreted P(heads) to mean P(per-experiment-heads), would say that P(heads | it-is-currently-Monday) is meaningless?
The “per-experiment” part is a description of, among other things, how we are calculating the probability.
In other words, when you say “P(per-experiment event)” the “per-experiment” is really describing the P, not just the event. So if you say “P(per-experiment event|per-awakening event)” that really is meaningless; you’re giving two contradictory descriptions to the same P.
THANK YOU. I now see that there are two sides of the coin.
However, I feel like it’s actually Heads, and not P, that is ambiguous. There is the probability that the coin would land heads. The coin lands exactly once per experiment, and half the time it will land heads. If you count Beauty’s answer to the question “what is the probability that the coin landed heads” once per awakening, you’re sometimes double-counting her answer (on Tails). It’s dishonest to ask her twice about an event that only happened once.
On the other hand, there is the probability that if Beauty were to peek, she would see heads. If she decided to peek, then she would see the coin once or twice. Under SIA, she’s twice as likely to see tails. If you count Beauty’s answer to the question “what is the probability that the coin is currently showing heads” once per experiment, you’re sometimes ignoring her answer (on Tuesdays). It would be dishonest to only count one of her two answers to two distinct questions.
(Being more precise: suppose the coin lands tails, and you ask Beauty “What is the probability that the coin is currently showing heads?” on each day, but only count her answer on Monday. Well, you’ve asked her two distinct questions, because the meaning of “currently” changes between the two days, but only counted one of them. It’s dishonest.)
Thus, this question isn’t up for interpretation. The answer is 1⁄2, because the question (on Wikipedia, at least) asks about the probability that the coin landed heads. There are two interpretations—per experiment and per awakening—but the interpretation should be set by the question. Likewise, setting up a bet doesn’t help settle which interpretation to use: either interpretation is perfectly capable of figuring out how to maximize expectation for any bet; it just might consider some bets to be rigged.
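For example (with numbers I’m making up): suppose Beauty wins $1 at each awakening if the coin landed heads and loses $1 at each awakening if it landed tails. Per experiment: P(Heads) = 1⁄2, but the bet settles twice on Tails, so EV = 1⁄2 * (+1) + 1⁄2 * (2 * −1) = −1⁄2 per experiment. Per awakening: P(Heads) = 1⁄3, so EV = 1⁄3 * (+1) + 2⁄3 * (−1) = −1⁄3 per awakening, and with 3⁄2 awakenings per experiment on average that is again −1⁄2 per experiment. Both interpretations decline the bet; the halfer just describes it as a fair coin with doubled stakes on Tails.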
Although this is subtle, and maybe I’m still missing things. For one, why is Bayes’ rule failing? I now know how to use it both to prove that P(Heads) < 1⁄2 and to prove that P(Heads) = 1⁄2, by marginalizing on either CurrentlyMonday/CurrentlyTuesday or on WillWakeUpOnTuesday/WontWakeUpOnTuesday. When you use
P(X) = P(X | A) * P(A) + P(X | B) * P(B)
you need that A and B are mutually exclusive. But this seems to be suggesting that there’s some other subtle requirement as well that somehow depends on what X is.
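(Writing both out: marginalizing per awakening, P(Heads) = P(Heads | CurrentlyMonday) * P(CurrentlyMonday) + P(Heads | CurrentlyTuesday) * P(CurrentlyTuesday) = 1⁄2 * 2⁄3 + 0 * 1⁄3 = 1⁄3. Marginalizing per experiment, WillWakeUpOnTuesday is just Tails, so P(Heads) = P(Heads | WillWakeUpOnTuesday) * P(WillWakeUpOnTuesday) + P(Heads | WontWakeUpOnTuesday) * P(WontWakeUpOnTuesday) = 0 * 1⁄2 + 1 * 1⁄2 = 1⁄2.)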
It could be, as you say, that P is different. But P should only depend on your knowledge and priors. All the priors are fixed here (it’s a fair coin, use SIA), so what are the two sets of knowledge?
In other words, when you say “P(per-experiment event)” the “per-experiment” is really describing the P, not just the event.
My understanding is that P depends only on your knowledge and priors. If so, what is the knowledge that differs between per-experiment and per-awakening? Or am I wrong about that?
That doesn’t help. “Coin landed heads” can still be used to describe either a per-experiment or per-awakening situation:

1) Given many experiments, if you selected one of those experiments at random, in what percentage of those experiments did the coin land heads?

2) Given many awakenings, if you selected one of those awakenings at random, in what percentage of those awakenings did the coin land heads?
My understanding is that P depends only on your knowledge and priors.
Ok, yes, agreed.

A per-experiment P means that P would approach the number you get when you divide the number of successes in a series of experiments by the number of experiments. Likewise for a per-awakening event. You could phrase this as “different knowledge” if you wish, since you know things about experiments that are not true of awakenings and vice versa.
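In symbols, over N runs: per-experiment P(Heads) = (number of Heads experiments) / N, which approaches 1⁄2, while per-awakening P(Heads) = (number of Heads awakenings) / (number of awakenings) = (N/2) / (3N/2), which approaches 1⁄3.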
I’m confused why anyone would want not to condition on the fact that Beauty is awake when the problem states that she’s interviewed each time she wakes up.
This is an SIA idea, and it’s wrong. There’s nothing to condition on because there’s no new information, just as there’s no new information when you find that you exist. You can never find yourself in a position where you don’t exist or where you’re not awake (assuming being awake here is the same as being conscious).
Please don’t make statements like this unless you really understand the other person’s position (can you guess how I will respond?). For instance, notice that I haven’t ever said that the halfer position is wrong.
There’s nothing to condition on because there’s no new information
This is just a restatement of SSA. By SIA there is new information, since you’re more likely to be one of a larger set of people.
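(The SIA computation: weight each outcome by its prior times the number of awakenings it contains, so P(Tails) ∝ 1⁄2 * 2 and P(Heads) ∝ 1⁄2 * 1, which normalizes to 2⁄3 and 1⁄3.)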
just as there’s no new information when you find that you exist
Sure there is! Flip a coin and kill Beauty on tails. Now ask her how the coin landed: she learns from the fact that she’s alive that it landed heads.
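(The update, spelled out: P(Heads | alive) = P(alive | Heads) * P(Heads) / (P(alive | Heads) * P(Heads) + P(alive | Tails) * P(Tails)) = (1 * 1⁄2) / (1 * 1⁄2 + 0 * 1⁄2) = 1.)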
I understand that SSA is a consistent position, and I understand that it matches your intuition if not mine. I’m curious how you’d respond to the question I asked above. It’s in the post with “So your probabilities aren’t grounded in frequency&utility.”
For instance, notice that I haven’t ever said that the halfer position is wrong.
And I didn’t say (or even mean to say) that your position is wrong. I said the SIA idea is wrong.
Sure there is! Flip a coin and kill Beauty on tails. Now ask her how the coin landed: she learns from the fact that she’s alive that it landed heads.
You can learn something from the fact that you are alive, as in cases like this. But you don’t learn anything from it in the cases where the disagreement between SSA and SIA comes up. I’ll say more about this in replying to the other comments, but for the moment, consider this thought experiment:
Suppose that you wake up tomorrow in your friend Tom’s body and with his memories and personality. He wakes up tomorrow in yours in the same way. The following day, you swap back, and so it goes from day to day.
Notice that this situation is empirically indistinguishable from the real world. Either the situation is meaningless, or you don’t even have a way to know it isn’t happening. The world would seem the same to everyone, including to you and him, if it were the case.
So consider another situation: you don’t wake up tomorrow at all. Someone else wakes up in your place with your memories and personality.
Once again, this situation is either meaningless, or no one, including you, has a way to know it didn’t already happen yesterday.
So you can condition on the fact that you woke up this morning, rather than not waking up at all. We can conclude from this, for example, that the Earth was not destroyed. But you cannot condition on the fact that you woke up this morning instead of someone else waking up in your place, since for all you know, that is exactly what happened.
The application of this to SSA and SIA should be evident.