One of the points I’m making in this post is that the question of the Sleeping Beauty problem is very context-sensitive: people go between the anthropic motte and the anthropic bailey without even realising it.
You should explicitly specify whether by “degree of belief for the coin having come up heads” you mean in this experiment or in this awakening. As you can see:
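For reference, here is a minimal sketch of the classic() helper the snippets below rely on, included so they are self-contained. It assumes the standard protocol (Heads: awakened on Monday only; Tails: awakened on Monday and Tuesday); the original post’s implementation may differ.

import random

def classic():
    # minimal sketch (assumed implementation): one run of the classic
    # experiment, returning the days Beauty is awakened and the coin outcome
    coin = random.choice(['Heads', 'Tails'])
    days = ['Monday'] if coin == 'Heads' else ['Monday', 'Tuesday']
    return days, coin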
coin_guess = []
for n in range(100000):
    days, coin = classic()
    for d in days:
        # one entry per awakening: Tails experiments contribute two entries
        coin_guess.append(coin == 'Heads')
print(coin_guess.count(True)/len(coin_guess)) # 0.3322852689217815
coin_guess = {}
for i in range(100000):
    days, coin = classic()
    for d in days:
        # one entry per experiment: the key is the experiment index,
        # so repeated awakenings overwrite the same entry
        coin_guess[i] = (coin == 'Heads')
coin_guess = list(coin_guess.values())
print(coin_guess.count(True)/len(coin_guess)) # 0.50167
Which answer is correct depends solely on how we count. And the whole controversy comes from this ambiguity: people confuse the probability that the coin is Heads with the probability that the coin is Heads weighted by the number of awakenings you have.
You should also link to the original paper on the Double Halfer position, authored by Mikaël Cozic.
As I show here:
coin_guess = []
for n in range(100000):
    days, coin = classic()
    # Beauty learns that it is Monday (her first awakening)
    beauty_knows_monday = (days[0] == 'Monday')
    if beauty_knows_monday:
        coin_guess.append(coin == 'Heads')
print(coin_guess.count(True)/len(coin_guess)) # 0.49958
The Halfer approach promoted by Lewis is incorrect for the classic version of Sleeping Beauty. Double Halfer reasoning is correct when we are talking about probability, not weighted probability.
And the whole controversy comes from this ambiguity: people confuse the probability that the coin is Heads with the probability that the coin is Heads weighted by the number of awakenings you have.
I don’t think this is confusion. Obviously no one thinks that any outsider’s probability should be different from 1⁄2; it is just that:
You should explicitly specify whether by “degree of belief for the coin having come up heads” you mean in this experiment or in this awakening.
Thirders think that “this awakening” is the correct way to define subjective probability; you think “this experiment” is the correct way to define subjective probability. It is a matter of definitions, and no confusion is necessarily involved.
Thanks, I’m reading the “Imaging and Sleeping Beauty” paper now; I’ll add it to Manifold shortly.
Like Simon, I think the best interpretation of the Sleeping Beauty problem is that it’s asking about the probability “in the awakening”, and there seems to be consensus that the probability “in the experiment” is 1⁄2. But I plan to defer to expert consensus once it exists.
I don’t think there is consensus that this “in the awakening” probability is 1⁄3. It looks like Bostrom (2006) invokes SSA to say that in a one-shot Sleeping Beauty experiment the probability is 1⁄2. And Milano (2022) thinks it depends on priors, so that a solipsistic prior gives probability 1⁄2.
I also don’t think this is just a matter of confusion. With respect to the motte and bailey you describe, it looks to me like many thirders hold the bailey position, both in “classic” and “incubator” versions of the problem. So if you claim that the bailey position is wrong, then there is a real dispute in play.
there seems to be consensus that the probability “in the experiment” is 1⁄2
I also don’t think this is just a matter of confusion. With respect to the motte and bailey you describe, it looks to me like many thirders hold the bailey position, both in “classic” and “incubator” versions of the problem.
Well, you see, this is precisely the confusion I’m talking about.
If there are thirders who hold the bailey position, that participating in the experiment gives them the ability to correctly guess tails in 2⁄3 of the experiments, then there can’t be a consensus that the probability “in the experiment” is 1⁄2.
The whole “paradox” is that despite the fact that any random awakening is 2⁄3 likely to happen when the coin landed Tails, the fact that you are awake doesn’t help you guess the outcome of the coin toss in this experiment better than chance. So it’s very important to be precise about what you mean by “credence” or “probability”.
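A quick simulation makes the distinction concrete (reusing the classic() sketch above and always guessing Tails):

n_experiments = 100000
experiments_correct = 0           # one guess per coin toss
awakenings_total = 0
awakenings_correct = 0            # one guess per awakening
for _ in range(n_experiments):
    days, coin = classic()
    experiments_correct += (coin == 'Tails')
    for d in days:
        awakenings_total += 1
        awakenings_correct += (coin == 'Tails')
print(experiments_correct / n_experiments)   # ~0.5: per experiment, no better than chance
print(awakenings_correct / awakenings_total) # ~0.667: per awakening, Tails is overrepresented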
So if you claim that the bailey position is wrong, then there is a real dispute in play.
Yep. This is specifically the dispute I want to address, but to do that one has to properly separate the bailey from the motte first. The next post will explore the bailey position in more detail and show how it violates conservation of expected evidence.
I’m now unclear exactly what the bailey position is from your perspective. You said in the opening post, regarding the classic Sleeping Beauty problem:
The Bailey is the claim that the coin is actually more likely to be Tails when I participate in the experiment myself. That is, my awakening on Monday or Tuesday gives me evidence that lawfully updates me to thinking that the coin landed Tails with 2⁄3 probability.
From the perspective of the Bayesian Beauty paper, the thirder position is that, given the classic (non-incubator) Sleeping Beauty experiment, with these anthropic priors:
P(Monday | Heads) = 1⁄2
P(Monday | Tails) = 1⁄2
P(Heads) = 1⁄2
Then the following is true:
P(Heads | Awake) = 1⁄3
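For concreteness, here is the arithmetic as I understand it, assuming P(Awake | Heads & Monday) = 1, P(Awake | Heads & Tuesday) = 0, and P(Awake | Tails) = 1:

P(Awake | Heads) = P(Monday | Heads) = 1⁄2
P(Awake | Tails) = 1
P(Heads | Awake) = (1⁄2 × 1⁄2) / (1⁄2 × 1⁄2 + 1 × 1⁄2) = 1⁄3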
I think this follows from the given assumptions and priors. Do you agree?
One conversion of this into words is that my awakening (Awake=True) gives me evidence that lawfully updates me from thinking, on Sunday, that the coin will land either way with equal probability (P(Heads) = 1⁄2) to waking up and thinking that the coin right now is more likely to be showing tails (P(Heads | Awake) = 1⁄3). Do you disagree with the conversion of the math into words? Would you perhaps phrase it differently?
Whereas now you define the bailey position as:
The bailey position: that participating in the experiment gives them the ability to correctly guess tails in 2⁄3 of the experiments.
I agree with you that this is false, but it reads to me as a different position.
The Bailey is the claim that the coin is actually more likely to be Tails when I participate in the experiment myself. That is, my awakening on Monday or Tuesday gives me evidence that lawfully updates me to thinking that the coin landed Tails with 2⁄3 probability.
The bailey position: that participating in the experiment gives them the ability to correctly guess tails in 2⁄3 of the experiments.
Could you explain what difference you see between these two positions?
If you receive some evidence that lawfully updates you to believing that the coin is Tails with 2⁄3 probability in this experiment, then in 2 out of 3 experiments the coin has to be Tails when you receive this evidence.
If you receive this evidence in every experiment you participate in, then the coin has to be Tails in 2 out of 3 of the experiments you participate in, and thus you have to be able to correctly guess Tails in 2 out of 3 experiments.
P(Monday | Heads) = 1⁄2
P(Monday | Tails) = 1⁄2
P(Heads) = 1⁄2
Then the following is true:
P(Heads | Awake) = 1⁄3
I think this follows from the given assumptions and priors. Do you agree?
There is a fundamental issue with trying to apply formal probability theory to the classic Sleeping Beauty problem: the setting doesn’t satisfy the assumptions of the Kolmogorov axioms. P(Monday) and P(Tuesday) are poorly defined and are not actually two elementary outcomes of the sample space, because Tuesday follows Monday. Likewise, P(Heads&Monday), P(Tails&Monday) and P(Tails&Tuesday) are poorly defined and are not three elementary outcomes, for a similar reason.
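A quick check with the classic() sketch above illustrates the point (again, assuming that implementation):

tails_runs = 0
co_occur = 0
for _ in range(100000):
    days, coin = classic()
    if coin == 'Tails':
        tails_runs += 1
        # do the would-be outcomes Tails&Monday and Tails&Tuesday co-occur?
        co_occur += ('Monday' in days and 'Tuesday' in days)
print(co_occur / tails_runs) # 1.0: they always co-occur in the same run,
                             # so they cannot be mutually exclusive elementary outcomes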
I’ll give the Bayesian Beauty paper a deeper read, but from what I’ve already seen it just keeps applying the same mathematical apparatus to a setting it doesn’t properly fit.
Could you explain what difference you see between these two positions?
In the second one you specifically describe “the ability to correctly guess tails in 2⁄3 of the experiments”, whereas in the first you more loosely describe “thinking that the coin landed Tails with 2⁄3 probability”, which I previously read as being a probability per awakening rather than per coin flip.
Would it be less misleading if I changed the first phrase like this:
The Bailey is the claim that the coin is actually more likely to be Tails when I participate in the experiment myself. That is, my awakening on Monday or Tuesday gives me evidence that lawfully updates me to thinking that the coin landed Tails with 2⁄3 probability in this experiment, not just in the average awakening.