She knows that when the coin is Heads this event has 100% probability, while on Tails it’s only 50%.
I might be missing something in your argument, but I think in my setup as stated, it should be 50% in both cases. When Alice’s coin is Heads, she wakes up on both days, but Bob wakes up on only one of them, depending on his own coin. So whether Alice is a halfer or a thirder, meeting Bob doesn’t give her any new information about her coin. Bob, on the other hand, in case of meeting Alice, does update to 2⁄3 about Alice’s coin. So if the Alice he’s meeting is a halfer, they have an unresolvable disagreement about her coin.
This way a thirder Alice can make herself arbitrarily confident in the result of the coin toss just by a precommitment!
Yeah, also known as “algorithm for winning the lottery”: precommit to make many copies of yourself if you win. I guess we thirders have learned to live with it.
Think about it in terms of what happens in the experiment as a whole—that’s what halfing is about. If the coin is Heads, Alice always meets Bob, either on Monday or on Tuesday, depending on when Bob is awake. If the coin is Tails, Alice can meet Bob only on Monday, which happens only 50% of the time. Run the experiment multiple times and there will be iterations where Alice and Bob do not meet, and all of them happen when the coin is Tails.
That’s how halfer Alice thinks:
I’m awake. But this doesn’t give me any new information, because I was expecting to be awake in this experiment with 100% probability, regardless of the outcome of the coin toss:

P(Heads|Awake)=P(Awake|Heads)P(Heads)/P(Awake)=1∗1/2∗1=1/2

Oh, hi Bob! Now this doesn’t happen in every experiment. This is new evidence in favour of Heads, so I update:

P(Heads|MeetsBob)=P(MeetsBob|Heads)P(Heads)/P(MeetsBob)=1∗1/2∗4/3=2/3
This is the same update that Bob makes when he meets Alice. On the other hand, thirder Alice believes that the probability of Heads is 2⁄3 even before she meets Bob, which makes her overconfident in the wrong answer when the meeting doesn’t happen and the coin is Tails.
Likewise, you can remove Bob from the problem altogether and just make it so Alice has only a 50% chance to be awake on Monday, and if that didn’t happen and the coin is Heads, she will be awake on Tuesday. Here awakening is new evidence, because it doesn’t happen in every experiment.
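In case a concrete check helps, here is a quick Monte Carlo sketch of those per-experiment frequencies (the setup as described above: Alice wakes on both days on Heads and only on Monday on Tails; Bob wakes on exactly one day, chosen by his own coin):

```python
import random

def simulate(trials=100_000, seed=0):
    rng = random.Random(seed)
    meetings = heads_meetings = 0
    for _ in range(trials):
        alice_heads = rng.random() < 0.5  # Alice wakes Mon+Tue on Heads, Mon only on Tails
        bob_monday = rng.random() < 0.5   # Bob wakes on exactly one day, per his coin
        # They meet iff Bob's single waking day is one of Alice's waking days:
        # Monday always works for Alice; Tuesday works only if her coin is Heads.
        met = bob_monday or alice_heads
        if met:
            meetings += 1
            heads_meetings += alice_heads
    return meetings / trials, heads_meetings / meetings

p_meet, p_heads_given_meet = simulate()
print(f"P(meeting) ~ {p_meet:.3f}")                      # about 3/4
print(f"P(Heads | meeting) ~ {p_heads_given_meet:.3f}")  # about 2/3
```

Meetings happen in about 3⁄4 of all iterations, and in about 2⁄3 of the iterations with a meeting Alice’s coin came up Heads, matching the update above.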
It seems to me that the correct Bayesian updating is a bit different.
Let’s denote Alice and Bob’s coins as A and B, each taking values H or T, and denote the current day as D, taking values 1 or 2. Then, just after waking up but before learning whether Bob is awake, Halfer Alice has this prior: P(A=H∧D=1) = 1⁄4, P(A=H∧D=2) = 1⁄4, P(A=T∧D=1) = 1⁄2, and independently P(B=H) = P(B=T) = 1⁄2.
After that, meeting Bob gives her new information N = (A=H∧D=1∧B=H) ∨ (A=H∧D=2∧B=T) ∨ (A=T∧D=1∧B=H). These are three mutually exclusive clauses, and we can compute each of them according to Alice’s prior above: P(A=H∧D=1∧B=H) = 1⁄4 * 1⁄2 = 1⁄8, P(A=H∧D=2∧B=T) = 1⁄4 * 1⁄2 = 1⁄8, P(A=T∧D=1∧B=H) = 1⁄2 * 1⁄2 = 1⁄4. The probability mass of N is split equally between A=H and A=T, so observing N shouldn’t make Halfer Alice update about her coin.
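For what it’s worth, this arithmetic checks out exactly (a quick sketch with exact fractions; the dictionary encoding of the prior is just my own bookkeeping):

```python
from fractions import Fraction as F

# Halfer Alice's per-awakening prior over (her coin, current day)
prior = {("H", 1): F(1, 4), ("H", 2): F(1, 4), ("T", 1): F(1, 2)}
p_bob = {"H": F(1, 2), "T": F(1, 2)}  # Bob's coin, independent of the above

# N = the three mutually exclusive ways Alice can meet Bob
clauses = [("H", 1, "H"), ("H", 2, "T"), ("T", 1, "H")]
p_clause = {(a, d, b): prior[(a, d)] * p_bob[b] for (a, d, b) in clauses}

p_n = sum(p_clause.values())  # total probability of meeting Bob
p_heads_given_n = sum(v for (a, _, _), v in p_clause.items() if a == "H") / p_n
print(p_n, p_heads_given_n)  # 1/2 1/2
```

So under this per-awakening prior, P(N) = 1⁄2 and P(A=H|N) = 1⁄2: no update about Alice’s coin.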
You are describing the Lewisian Halfer model. It indeed produces incorrect results for Sleeping Beauty; I’m currently working on a post that dives deeper into it, but for now suffice it to say that, according to it, P(Heads|Monday) = 2⁄3, which is clearly wrong. It also fails at betting.
The correct halfer position for Sleeping Beauty, also known as double halfing, claims that both P(Heads|Awake) = 1⁄2 and P(Heads|Monday) = 1⁄2. Once again, I’m planning a deep dive into how that is possible and why it’s the right answer in a separate post. For now, you can just note that it’s about probabilities averaged per iteration of the experiment, not per awakening, and that my comment is talking about how a Beauty who is this kind of halfer is supposed to reason.
suffice it to say that, according to it, P(Heads|Monday) = 2⁄3, which is clearly wrong
That sentence confuses me. The formulas from my comment imply that P(A=H|D=1) = 1⁄3 and P(A=T|D=1) = 2⁄3, which looks like an ok halfer position (modulo the fact that I accidentally swapped heads and tails in the very first comment and now am still sticking to that for coherence—sorry!)
About the double halfer position, not sure I understand it. What probabilities does it give for these three conjunctions: P(A=H∧D=1), P(A=H∧D=2), P(A=T∧D=1)? It seems to me (maybe naively) that these three numbers should be enough; any conditionals can be calculated from them by Bayes’ theorem.
Think about it. In Sleeping Beauty, the Monday awakening always happens, so the coin toss can be made after this awakening. If 2⁄3 were the correct estimate, the Beauty would be able to predict the future coin toss better than chance, which would’ve been… quite peculiar, to say the least. Of course, one can also just run a simulation of the experiment and check.
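Such a check might look like this (a sketch of the standard setup, where the Monday awakening always happens and we record the coin’s value at each Monday awakening):

```python
import random

rng = random.Random(0)
runs = 100_000
monday_heads = 0
for _ in range(runs):
    heads = rng.random() < 0.5
    # A Monday awakening happens in every run (the toss could even be made
    # after it); a Tuesday awakening is added only on one side of the coin.
    monday_heads += heads

freq = monday_heads / runs
print(f"Heads frequency at Monday awakenings: {freq:.3f}")  # ~0.5
```

The Heads frequency among Monday awakenings stays at about 1⁄2: knowing it’s Monday gives no power to predict the toss.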
Lewisian Halfers, however, are right for the Fissure mind experiment, where a person is either left in Room 1 or split into two people, one of whom goes to Room 1 and the other to Room 2, at random. In such an experiment you can’t be certain to be in Room 1, so being in Room 1 is indeed more likely if there was no splitting. And if a visitor comes to a random room and you meet there, you will indeed have a disagreement about probabilities.
About the double halfer position, not sure I understand it. What probabilities does it give for these three conjunctions: P(A=H∧D=1), P(A=H∧D=2), P(A=T∧D=1)?
Answering your direct question: 1⁄2 for all three.
That’s because (A=H∧D=1) and (A=H∧D=2) are actually the same outcome. P(D=1) = 1; P(D=2) = 1⁄2.
Here (D=1) doesn’t mean “this awakening is happening on the first day of the experiment” but rather “an awakening on the first day of the experiment has happened”. Likewise with (D=2). (D=1) and (D=2) are not mutually exclusive here, but intersecting: if (D=2) is true, then (D=1) is always true as well. And if you want to talk specifically about “this awakening is happening on the first day of the experiment”, such a probability is undefined for the Sleeping Beauty setting.
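As a frequency check of those three numbers (a sketch, using this thread’s setup where Heads gives Alice both awakenings):

```python
import random

rng = random.Random(0)
runs = 100_000
mon_happened = tue_happened = heads_and_mon = 0
for _ in range(runs):
    heads = rng.random() < 0.5
    mon_happened += 1        # a Monday awakening happens in every iteration
    tue_happened += heads    # a Tuesday awakening happens only on Heads
    heads_and_mon += heads   # "Heads, and a Monday awakening happened"

print(mon_happened / runs)   # 1.0  -> P(D=1) = 1
print(tue_happened / runs)   # ~0.5 -> P(D=2) = 1/2
print(heads_and_mon / runs)  # ~0.5 -> P(A=H and D=1) = 1/2
```

Per iteration of the experiment, “a Monday awakening happened” is certain, while “a Tuesday awakening happened” has frequency 1⁄2, matching the numbers above.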
If this gave you more questions than answers, I’m hopefully going to answer them all when I finish my next couple of posts, where I attempt to rigorously justify all of it. For now, you can just notice that the double halfer position doesn’t have problems with betting, doesn’t update out of nowhere, and doesn’t allow you to arbitrarily manipulate your probabilities by precommitments.
And if you want to talk specifically about “this awakening is happening on the first day of the experiment”, such a probability is undefined for the Sleeping Beauty setting.
Yeah, I don’t know if “undefined” is a good answer.
To be fair, there are some decision-theoretic situations where “undefined” is a good answer. For example, let’s say Alice wakes up with amnesia on 10 consecutive days, and each day she’s presented with a choice of envelope A or envelope B, one of which contains money. And she knows that whichever envelope she chooses on day 1, the experimenter will put money in the other envelope on days 2-10. This case is truly undefined: the contents of the envelopes on the desk in front of Alice are eerily dependent on how Alice will make the choice. For example, if she always chooses envelope A, then she should believe that the money is in envelope A with probability 10% and in B with probability 90%. But she can’t use that knowledge to say “oh I’ll choose B then”, because that’ll change the probabilities again.
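If it helps, here is a sketch of those frequencies. One assumption I’m making (because the 10%/90% figures seem to require it) is that on day 1 the money sits in whichever envelope Alice’s fixed, amnesia-forced strategy picks:

```python
def money_by_day(strategy: str, days: int = 10):
    """Which envelope holds the money each day, for an amnesiac Alice who
    therefore picks the same envelope every day. Day 1's placement is
    assumed to match her pick (the assumption behind the 10%/90% split);
    days 2..10 get the other envelope, per the experimenter's rule."""
    other = "B" if strategy == "A" else "A"
    return [strategy] + [other] * (days - 1)

locations = money_by_day("A")
print(locations.count("A") / len(locations))  # 0.1
print(locations.count("B") / len(locations))  # 0.9
```

And of course switching the strategy to "B" just mirrors the placements, which is exactly the feedback loop that makes a fixed probability assignment impossible.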
But the Sleeping Beauty problem is not like that. Alice doesn’t make any decisions during the experiment that could feed back into her probabilities. If each day we put a sealed envelope in front of Alice, containing a note saying which day it is, then Alice really ought to have some probability distribution over what’s in the envelope. Undefined doesn’t cut it for me yet. Maybe I should just wait for your post :-)
Why “this awakening is happening during Monday” isn’t a valid event indeed requires some careful justification, and I wasn’t planning to go into more detail in this comment section. But the example of an undefined event you brought up is actually very helpful for getting the right intuition. Because yes, even though it’s less obvious, the Sleeping Beauty problem is very much like this.
Let’s look again at the envelope experiment. There are two possible sequences of money placement: ABBBBBBBBB and BAAAAAAAAA. If you run the experiment and write down where the money was at each awakening, you will always observe these long streaks of As or Bs. If we do the same in a different experimental setting, where on every awakening there is some non-zero chance that the money is in either envelope A or envelope B, we won’t always observe such a streak of identical values.
The exact same behaviour occurs with Alice’s and Bob’s awakenings. Run the experiment multiple times and write down on which day the awakenings happen. Bob will have his Monday and Tuesday awakenings occurring in random order, while Alice’s Tuesday awakening will always be preceded by a Monday awakening. Compare experiments with more awakenings, for example where Alice wakes up on every day of the week on Heads and Bob wakes up on a random day of the week, and it will be even more obvious.
Just as the money will always be placed in envelope B on day two when Alice picks envelope A on day one, Alice’s awakening on Heads&Monday will always be followed by an awakening on Heads&Tuesday. In both cases, the previous awakening affects the future one.
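That bookkeeping can be sketched like this (the tuple encoding of who wakes on which days is my own):

```python
import random

rng = random.Random(0)

alice_log, bob_log = [], []
for _ in range(10):
    alice_heads = rng.random() < 0.5
    bob_monday = rng.random() < 0.5
    # Alice: always Monday, plus Tuesday on Heads. Bob: exactly one day.
    alice_log.append(("Mon", "Tue") if alice_heads else ("Mon",))
    bob_log.append(("Mon",) if bob_monday else ("Tue",))

print("Alice:", alice_log)  # every "Tue" follows a "Mon" within the same run
print("Bob:  ", bob_log)    # "Mon" and "Tue" runs arrive in random order

# Within each run, Alice's Tuesday awakening can only follow her Monday one:
assert all(run[0] == "Mon" for run in alice_log)
```

Alice’s log always shows the Monday-then-Tuesday streak, while Bob’s single days come in no particular order, just like the envelope streaks above.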
Hmm. But in the envelope experiment, once Alice commits to a decision (e.g. choose A), her probabilities are well-defined. So in Sleeping Beauty, if we make it so the day is automatically disclosed to Alice at 5pm let’s say, it seems like her probabilities about it should be well-defined from the get go. Or at least, the envelope experiment doesn’t seem to shed light why they should be undefined. Am I missing something?
Hmm. But in the envelope experiment, once Alice commits to a decision (to e.g. choose A), the probabilities are well-defined. So in Sleeping Beauty, if the day is automatically disclosed to Alice at 5pm let’s say, it seems like her probabilities about it should be well-defined from the get go.
Do you mean that conditional probabilities should be well defined? They indeed are.
P(Heads|Monday) = 1⁄2; P(Heads|Tuesday) = 1. But as P(Monday) and P(Tuesday) are not defined, you can’t use them to arrive at P(Heads&Monday) and P(Heads&Tuesday) via Bayes’ theorem.
If you say things like “P(X|Y) is defined but P(Y) isn’t”, doesn’t that call for a reformulation of all probability theory? Like, if I take the interpretation of probability theory based on sigma-algebras (which is quite popular), then P(Y) gotta be defined, no way around it. The very definition of P(X|Y) depends on P(X∧Y) and P(Y). You can say “let’s kick out this leg from this table”, but the math tells me pretty insistently that the table can’t stand without that particular leg. Or at least, if there’s a version of probability theory where P(Y) can be undefined but P(X|Y) defined, I’d want to see more details about that theory and how it doesn’t trip over itself. Does that make sense?
Sure. But this has already been done, and it took much less trouble than you might have thought. Citing Wikipedia on Conditional Probability:

This equation for a conditional probability, although mathematically equivalent, may be intuitively easier to understand. It can be interpreted as “the probability of B occurring multiplied by the probability of A occurring, provided that B has occurred, is equal to the probability of the A and B occurrences together, although not necessarily occurring at the same time”. Additionally, this may be preferred philosophically; under major probability interpretations, such as the subjective theory, conditional probability is considered a primitive entity. Moreover, this “multiplication rule” can be practically useful in computing the probability of A ∩ B and introduces a symmetry with the summation axiom for Poincaré Formula.
Not sure I understand. My question was, what kind of probability theory can support things like “P(X|Y) is defined but P(Y) isn’t”. The snippet you give doesn’t seem relevant to that, as it assumes both values are defined.
The kind of probability theory that defines P(X|Y) axiomatically as a primitive entity and only then defines P(X&Y) as the product of P(X|Y) and P(Y), instead of defining conditional probability as the ratio of P(X&Y) to P(Y).
While mathematically equivalent, the former approach better resembles the way people deal with probabilities in practice (usually the conditional probability is known and the probability of the intersection isn’t), and it formally allows us to talk about conditional probabilities even when the probability of the event we condition on is not defined.