Generally I think the 1⁄3 argument is more appealing, just based on two principles:
Credences should follow the axioms of a probability space (including conditionality);
The conditional credence for heads given that it is Monday, is 1⁄2.
That immediately gives P(Heads & Monday) = P(Tails & Monday). The only way this can be compatible with P(Heads) = 1⁄2 is if P(Tails & Tuesday) = 0, and I don’t think anybody supports that!
P(Tails & Tuesday) = P(Tails & Monday) isn’t strictly required by these principles, but it certainly seems a highly reasonable assumption and yields P(Heads) = 1⁄3.
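For what it's worth, the per-awakening frequencies behind this argument are easy to check with a quick Monte Carlo sketch (the event names here are mine, purely for illustration):

```python
import random

# Simulate many runs of the Sleeping Beauty experiment and record
# every awakening as a (coin, day) pair.
random.seed(0)
awakenings = []
for _ in range(100_000):
    coin = random.choice(["Heads", "Tails"])
    awakenings.append((coin, "Monday"))       # Beauty wakes on Monday either way
    if coin == "Tails":
        awakenings.append((coin, "Tuesday"))  # Tails adds a Tuesday awakening

heads = [a for a in awakenings if a[0] == "Heads"]
monday = [a for a in awakenings if a[1] == "Monday"]
heads_monday = [a for a in monday if a[0] == "Heads"]

print(len(heads) / len(awakenings))     # per-awakening frequency of Heads, ~1/3
print(len(heads_monday) / len(monday))  # frequency of Heads among Monday awakenings, ~1/2
```

Counting awakenings this way, principle (2) comes out as the ~1⁄2 conditional frequency, and P(Heads) = 1⁄3 falls out of the overall count.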
I don’t think anybody disagrees with principle (2).
Principle (1) is somewhat more dubious though. Since all credences are conditional on epistemic state, and this experiment directly manipulates epistemic state (via amnesia), it is arguable that “rational” conditional credences might not necessarily obey probability space rules.
Lots of people disagree with 2.
They … what? I’ve never read anything suggesting that. Do you have any links or even a memory of an argument that you may have seen from such a person?
Edit: Just to clarify, conditional credence P(X|Y) is of the form “if I knew Y held, then my credence for X would be …”. Are you saying that lots of people believe that if they knew it was Monday, then they would hold something other than equal credence for heads and tails?
Yes—Lewis held this, for instance, in the most famous paper on the topic.
Good point! Lewis’ notation P_+(HEADS) does indeed refer to the conditional credence upon learning that it’s Monday, and he sets it to 2⁄3 by reasoning backward from P(HEADS) = 1⁄2 and using my (1).
So yes, there are indeed people who believe that if Beauty is told that it’s Monday, then she should update to believing that the coin was more likely heads than not. Which seems weird to me—I have a great deal more suspicion that (1) is unjustifiable than that (2) is.
If you’re a halfer and don’t think that your credence in heads should be 2⁄3 after finding out it’s Monday, you violate conservation of expected evidence. If you’re going to be told what day it is, your credence in tails might go up but has no chance of going down—if it’s day 2 it will spike to 100%, and if it’s day 1 it won’t change.
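The arithmetic behind this can be checked exactly. Here is a sketch using the standard Lewisian halfer awakening credences P(H&Mon) = 1⁄2, P(T&Mon) = P(T&Tue) = 1⁄4—an assumption of this sketch, not something stated in the thread:

```python
from fractions import Fraction as F

# Halfer per-awakening credences (illustrative assumption):
p = {("H", "Mon"): F(1, 2), ("T", "Mon"): F(1, 4), ("T", "Tue"): F(1, 4)}
p_monday = p[("H", "Mon")] + p[("T", "Mon")]  # 3/4
p_tuesday = p[("T", "Tue")]                   # 1/4
prior = F(1, 2)                               # halfer prior P(Heads)

# Being told "it's Tuesday" forces P(Heads) to 0 either way.
# Lewis: update to 2/3 on learning it's Monday.
lewis = p_monday * F(2, 3) + p_tuesday * 0
# Double-halfer: keep 1/2 on learning it's Monday.
double_halfer = p_monday * F(1, 2) + p_tuesday * 0

print(lewis == prior)          # True: expected posterior equals the prior
print(double_halfer == prior)  # False: expected posterior is 3/8, not 1/2
```

So on these per-awakening numbers, Lewis’s 2⁄3 update conserves expected evidence, while sticking with 1⁄2 on Monday does not.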
Is conservation of expected evidence a proposition one can reasonably maintain across epistemically hazardous situations such as memory wipes (or false memories, self-duplication, and so on)? Arguably, in such situations it is impossible to be perfectly rational, since the thing you do your reasoning with is being externally manipulated.
You would violate conservation of expected evidence if
P(Monday) + P(Tuesday) = 1
However, this is not the case, because P(Monday) = 1 and P(Tuesday) = 1⁄2.
I’m a bit surprised that you think this way, considering that you’ve basically solved the problem yourself in this comment.
P(Heads & Monday) = P(Tails & Monday) = 1⁄2
P(Tails & Monday) = P(Tails & Tuesday) = 1⁄2
Because Tails & Monday and Tails & Tuesday are the exact same event.
The mistake that everyone seems to be making is thinking that Monday/Tuesday mean “This awakening is happening during Monday/Tuesday”. But such events are ill-defined in the Sleeping Beauty setting. On Tails, both the Monday and the Tuesday awakening are supposed to happen in the same iteration of the probability experiment, and the Beauty is fully aware of that, so she can’t treat them as individual mutually exclusive outcomes.
You can only lawfully talk about “In this iteration of the probability experiment, a Monday/Tuesday awakening happens”.
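Under that per-experiment reading, the probabilities of the two awakening events are easy to simulate (a quick sketch; the variable names are mine):

```python
import random

# Treat "a Monday/Tuesday awakening happens" as events of one
# iteration of the experiment, not as properties of "this awakening".
random.seed(1)
n = 100_000
monday_happens = tuesday_happens = 0
for _ in range(n):
    tails = random.random() < 0.5
    monday_happens += 1       # a Monday awakening occurs in every iteration
    if tails:
        tuesday_happens += 1  # a Tuesday awakening occurs only on Tails

print(monday_happens / n)   # 1.0: P(a Monday awakening happens) = 1
print(tuesday_happens / n)  # ~0.5: P(a Tuesday awakening happens) = 1/2
```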
In this post I explain it in more detail.