Suppose Sleeping Beauty secretly brings a coin into the experiment and flips it each time she wakes up. There are now six possible combinations of heads and tails (the first letter is the experimenter's coin; the rest are her own flips, one per waking), each with its own probability:
HH: 1⁄4
HT: 1⁄4
THH: 1⁄8
THT: 1⁄8
TTH: 1⁄8
TTT: 1⁄8
When she wakes up and flips the coin, she notices it lands on heads. This eliminates two of the possibilities. Renormalizing the remaining values:
HH: 2⁄5
HT: 0
THH: 1⁄5
THT: 1⁄5
TTH: 1⁄5
TTT: 0
She can conclude that the coin landed on tails with 60% probability, rather than the usual 50% probability. She could flip her coin more times; doing so, she will asymptotically approach a 2⁄3 probability that it landed on tails.
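The elimination-and-renormalization step above can be checked mechanically. A minimal sketch (variable names are my own; the update rule is exactly the one described, i.e. seeing heads strikes out only HT and TTT):

```python
from fractions import Fraction

# Prior over the six sequences: first letter is the experimenter's
# coin, the remaining letters are Sleeping Beauty's own flips.
prior = {
    "HH": Fraction(1, 4), "HT": Fraction(1, 4),
    "THH": Fraction(1, 8), "THT": Fraction(1, 8),
    "TTH": Fraction(1, 8), "TTT": Fraction(1, 8),
}

# The post's update: seeing heads eliminates the sequences in which
# none of her own flips came up heads (HT and TTT).
posterior = {s: p for s, p in prior.items() if "H" in s[1:]}
total = sum(posterior.values())
posterior = {s: p / total for s, p in posterior.items()}

p_tails = sum(p for s, p in posterior.items() if s[0] == "T")
print(p_tails)  # 3/5
```

This reproduces the 2⁄5 / 1⁄5 / 1⁄5 / 1⁄5 table and the 60% figure exactly.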
Perhaps she gets caught with the coin and has it taken away. This isn't a problem: she can just look at dust specks, or anything else she can't predict and that won't be consistent between wakings. For all intents and purposes, she's using SSA. There's a difference if she's woken so many times that she's likely to make exactly the same observations more than once, but that takes her being woken on the order of 10^million times.
In the THT case, on Monday she flips heads. Thus, if she flips heads and has no way of knowing whether or not it's Monday, she can't eliminate the possibility of THT.
I think this is mistaken in that eliminating the HT and TTT possibilities isn’t the only update SB can make on seeing heads. Conditioning on a particular sequence of flips, an observation of heads is certain under the HH or THH sequences, but only 50% likely under the THT or TTH sequences, so SB should adjust probabilities accordingly and consequently end up with no new information about the initial flip.
HOWEVER. The above logic relies on the assumption that this is a coherent and useful way to consider probabilities in this kind of anthropic problem, and that’s not an assumption I accept. So take with a grain of salt.
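The likelihood-weighted update described above can be checked the same way. A sketch, assuming the observation is "heads at a uniformly chosen awakening within the sequence" (names are mine):

```python
from fractions import Fraction

prior = {
    "HH": Fraction(1, 4), "HT": Fraction(1, 4),
    "THH": Fraction(1, 8), "THT": Fraction(1, 8),
    "TTH": Fraction(1, 8), "TTT": Fraction(1, 8),
}

# Likelihood of seeing heads at the current awakening: the fraction
# of her own flips in the sequence that are heads (1 for HH/THH,
# 1/2 for THT/TTH, 0 for HT/TTT).
def p_heads(seq):
    flips = seq[1:]
    return Fraction(flips.count("H"), len(flips))

joint = {s: prior[s] * p_heads(s) for s in prior}
total = sum(joint.values())
posterior = {s: p / total for s, p in joint.items()}

p_tails = sum(p for s, p in posterior.items() if s[0] == "T")
print(p_tails)  # 1/2
```

Under this weighting the heads observation carries no information about the initial flip, matching the comment's conclusion.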
I think this is mistaken in that eliminating the HT and TTT possibilities isn’t the only update SB can make on seeing heads.
It’s the Sherlock Holmes Axiom that the original post was suggesting we use.
Conditioning on a particular sequence of flips, an observation of heads is certain under the HH or THH sequences, but only 50% likely under the THT or TTH sequences, so SB should adjust probabilities accordingly and consequently end up with no new information about the initial flip.
This would be SB deciding that she is randomly selected from the reference class of SBs. In other words, it’s SSA, only with a much smaller reference class than I’d suggest using.
If she uses a larger reference class, she’d realize that she’s about twice as likely to wake up in a room during the experiment if the coin landed on tails, and would conclude that there’s a nearly 2⁄3 probability of the coin landing on tails.
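The larger-reference-class count can be sketched as a quick Monte Carlo (my own framing: every awakening counts separately, and we ask how often the experimenter's coin was tails among awakenings where her own flip shows heads):

```python
import random

random.seed(0)
tails_heads_awakenings = 0  # her flip H, experimenter's coin T
total_heads_awakenings = 0  # all awakenings where her flip is H

for _ in range(200_000):
    experimenter = random.choice("HT")
    # Tails means she is woken twice, so tails-worlds contribute
    # twice as many awakenings to the reference class.
    wakings = 1 if experimenter == "H" else 2
    for _ in range(wakings):
        if random.choice("HT") == "H":  # her own flip this waking
            total_heads_awakenings += 1
            if experimenter == "T":
                tails_heads_awakenings += 1

print(tails_heads_awakenings / total_heads_awakenings)  # ≈ 2/3
```

Counting awakenings rather than sequences is what pushes the answer from 3⁄5 toward 2⁄3.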
I haven’t figured out how to verbalize this properly yet, but it feels to me like the “THT” and “TTH” entries are problematic — it seems like she should only be able to count one of those options, not both. When you remove one of them, then the first coin has equal probability of coming up heads and tails as we’d expect.
This is very interesting, but I haven’t quite grokked it yet. Thank you for what might be a fatal flaw. Upvoted while I think about it.
That doesn’t look right—if she just flipped H, then THT is also eliminated. So the renormalization should be:
HH: 1⁄2
HT: 0
THH: 1⁄4
THT: 0
TTH: 1⁄4
TTT: 0
Which means the coin doesn’t actually change anything.
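Given the elimination rule proposed here (flipping H is taken to rule out HT, THT, and TTT), the renormalization can be verified directly (a sketch):

```python
from fractions import Fraction

prior = {
    "HH": Fraction(1, 4), "HT": Fraction(1, 4),
    "THH": Fraction(1, 8), "THT": Fraction(1, 8),
    "TTH": Fraction(1, 8), "TTT": Fraction(1, 8),
}

# This comment's rule: a flip of H eliminates HT, THT, and TTT.
survivors = {s: p for s, p in prior.items() if s not in ("HT", "THT", "TTT")}
total = sum(survivors.values())
posterior = {s: p / total for s, p in survivors.items()}

print(posterior["HH"])  # 1/2
print(sum(p for s, p in posterior.items() if s[0] == "T"))  # 1/2
```

With those three sequences struck out, heads and tails come out even, so the extra coin indeed changes nothing under this rule.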