To be clear, you’re saying that, from a halfer position, “the probability that, when Beauty wakes up, it is currently Monday” is meaningless?
It’s meaningless in the sense that it doesn’t have a meaning that matches what you’re trying to use it for. Not that it literally has no meaning.
I’m confused why anyone would want not to condition on the fact that Beauty is awake when the problem states that she’s interviewed each time she wakes up.
It depends on what you’re trying to measure.
If you’re trying to measure what percentage of experiments have heads, you need to use a per-experiment probability. It isn’t obviously implausible that someone might want to measure what percentage of experiments have heads.
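To make the two ways of counting concrete, here is a rough simulation sketch (assuming the standard setup: a fair coin, one awakening on Heads, two on Tails; the function name is just illustrative):

```python
import random

def simulate(trials=100_000):
    """Fair coin; one awakening on Heads, two on Tails."""
    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5        # the coin is flipped once per experiment
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_experiments += 1
            heads_awakenings += awakenings   # her single Monday awakening
    print("fraction of experiments with heads:", heads_experiments / trials)           # ~1/2
    print("fraction of awakenings with heads: ", heads_awakenings / total_awakenings)  # ~1/3

simulate()
```

The same coin flips support both answers; the only difference is whether the denominator counts experiments or awakenings.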
It’s meaningless in the sense that it doesn’t have a meaning that matches what you’re trying to use it for. Not that it literally has no meaning.
What I’m trying to use it for is to compute P(Heads), from a halfer position, while carrying out my argument.
So in other words, P(per-experiment-heads | it-is-currently-Monday) is meaningless? And a halfer, who interpreted P(heads) to mean P(per-experiment-heads), would say that P(heads | it-is-currently-Monday) is meaningless?
The “per-experiment” part is a description of, among other things, how we are calculating the probability.
In other words, when you say “P(per-experiment event)” the “per-experiment” is really describing the P, not just the event. So if you say “P(per-experiment event|per-awakening event)” that really is meaningless; you’re giving two contradictory descriptions to the same P.
THANK YOU. I now see that there are two sides of the coin.
However, I feel like it’s actually Heads, and not P, that is ambiguous. There is the probability that the coin would land heads. The coin lands exactly once per experiment, and half the time it will land heads. If you count Beauty’s answer to the question “what is the probability that the coin landed heads” once per awakening, you’re sometimes double-counting her answer (on Tails). It’s dishonest to ask her twice about an event that only happened once.
On the other hand, there is the probability that if Beauty were to peek, she would see heads. If she decided to peek, then she would see the coin once or twice. Under SIA, she’s twice as likely to see tails. If you count Beauty’s answer to the question “what is the probability that the coin is currently showing heads” once per experiment, you’re sometimes ignoring her answer (on Tuesdays). It would be dishonest to only count one of her two answers to two distinct questions.
(Being more precise: suppose the coin lands tails, and you ask Beauty “What is the probability that the coin is currently showing heads?” on each day, but only count her answer on Monday. Well, you’ve asked her two distinct questions, because the meaning of “currently” changes between the two days, but only counted one of them. It’s dishonest.)
Thus, this question isn’t up for interpretation. The answer is 1⁄2, because the question (on Wikipedia, at least) asks about the probability that the coin landed heads. There are two interpretations—per experiment and per awakening—but the interpretation should be set by the question. Likewise, setting a bet doesn’t help settle which interpretation to use: either interpretation is perfectly capable of figuring out how to maximize expectation for any bet; it just might consider some bets to be rigged.
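To spell out the betting point with one concrete example (a sketch with made-up stakes and function names; the bet is not part of the problem statement): suppose a bet costing c and paying 1 on Heads is offered at every awakening. The per-awakening accounting uses P(Heads) = 1/3; the per-experiment accounting uses P(Heads) = 1/2 but counts the bet twice on Tails. Both say to accept exactly when c < 1/3.

```python
def ev_per_awakening(c):
    # Per-awakening P(Heads) = 1/3 (SIA): expected profit of one accepted bet.
    return (1/3) * (1 - c) + (2/3) * (-c)

def ev_per_experiment(c):
    # Per-experiment P(Heads) = 1/2, but on Tails the bet gets placed twice.
    return (1/2) * (1 - c) + (1/2) * (-2 * c)

for c in (0.30, 1/3, 0.40):
    print(f"c={c:.3f}  per-awakening EV={ev_per_awakening(c):+.4f}  "
          f"per-experiment EV={ev_per_experiment(c):+.4f}")
# Both expected values change sign at the same price, c = 1/3; they differ only by
# a factor of 3/2, the average number of awakenings per experiment.
```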
This is subtle, though, and maybe I’m still missing things. For one, why is Bayes’ rule failing? I now know how to use it both to prove that P(Heads) < 1⁄2 and to prove that P(Heads) = 1⁄2, by marginalizing on either CurrentlyMonday/CurrentlyTuesday or on WillWakeUpOnTuesday/WontWakeUpOnTuesday. When you use
P(X) = P(X | A) * P(A) + P(X | B) * P(B)
you need A and B to be mutually exclusive and exhaustive. But this seems to be suggesting that there’s some other subtle requirement as well that somehow depends on what X is.
It could be, as you say, that P is different. But P should only depend on your knowledge and priors. All the priors are fixed here (it’s a fair coin, use SIA), so what are the two sets of knowledge?
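For concreteness, here is how the two marginalizations come out when each one is carried through inside a single P (a worked sketch; the per-awakening numbers assume the usual SIA weighting of the three possible awakenings). Each decomposition is internally consistent as long as the conditionals and the weights come from the same P:

```python
# Per-awakening P (SIA): Mon-Heads, Mon-Tails, Tue-Tails are weighted equally,
# so P(CurrentlyMonday) = 2/3, P(Heads | CurrentlyMonday) = 1/2,
# and P(Heads | CurrentlyTuesday) = 0.
p_heads_per_awakening = (1/2) * (2/3) + 0 * (1/3)
print(p_heads_per_awakening)   # 1/3

# Per-experiment P: P(WillWakeUpOnTuesday) = P(Tails) = 1/2,
# P(Heads | WillWakeUpOnTuesday) = 0, P(Heads | WontWakeUpOnTuesday) = 1.
p_heads_per_experiment = 0 * (1/2) + 1 * (1/2)
print(p_heads_per_experiment)  # 1/2
```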
In other words, when you say “P(per-experiment event)” the “per-experiment” is really describing the P, not just the event.
My understanding is that P depends only on your knowledge and priors. If so, what is the knowledge that differs between per-experiment and per-awakening? Or am I wrong about that?
That doesn’t help. “Coin landed heads” can still be used to describe either a per-experiment or per-awakening situation:
1) Given many experiments, if you selected one of those experiments at random, in what percentage of those experiments did the coin land heads?
2) Given many awakenings, if you selected one of those awakenings at random, in what percentage of those awakenings did the coin land heads?
My understanding is that P depends only on your knowledge and priors.
Ok, yes, agreed.
A per-experiment P means that P would approach the number you get when you divide the number of successes in a series of experiments by the number of experiments. Likewise for a per-awakening P. You could phrase this as “different knowledge” if you wish, since you know things about experiments that are not true of awakenings and vice versa.