Computational biologist
Daniel Munro
It seems to me that Rule 1 is a direct translation of the Sleeping Beauty problem into a betting strategy question, while the other rules correspond to different questions where a single outcome depends on some function of the two guesses in the case of tails. Running the experiment 100 times under that rule, Beauty will have around 150 identical awakening experiences. The payout for each correct guess is the same, $1, and the correct guess would be tails 2⁄3 of the time. So surely the probability that the coin had landed tails prior to these events is 2⁄3? Not because it's an unfair coin or there was an information update (neither is true), but because the SB problem asks the probability from the perspective of someone being awakened, and 2⁄3 of these experiences happen after flipping tails. It seems a stretch to say the bet is 50⁄50 but that the second 50% happens twice as often.
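The arithmetic in the comment above is easy to check by simulation. This is just a sketch of the Rule-1 setup as I understand it (one awakening per heads, two per tails, $1 bet on tails each time); the function name is my own:

```python
import random

def awakening_tails_fraction(n_experiments: int, seed: int = 0) -> float:
    """Repeat the Sleeping Beauty experiment and return the fraction of
    awakenings at which the coin shows tails (i.e. the fraction of $1
    bets on tails that pay off under Rule 1)."""
    rng = random.Random(seed)
    tails_awakenings = 0
    total_awakenings = 0
    for _ in range(n_experiments):
        tails = rng.random() < 0.5       # fair coin flip
        if tails:
            total_awakenings += 2        # Monday and Tuesday awakenings
            tails_awakenings += 2        # both bets on tails win
        else:
            total_awakenings += 1        # Monday awakening only
    return tails_awakenings / total_awakenings

print(awakening_tails_fraction(100_000))  # close to 2/3
```

With 100 experiments the counts come out near 150 awakenings, of which roughly 100 follow tails, matching the 2⁄3 figure.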
Fascinating. But are these diagrams really showing HMMs? I thought each state in an HMM had a set of transition probabilities and another set of emission probabilities, which at each step are sampled independently. In these diagrams, the two processes are coupled. If “Even Ys” were a conventional HMM, SE would sometimes emit X and transition to SO, which would result in some even and some odd runs of Y. Are these a special variant of HMM, or some other type of state machine? And would these results apply to conventional HMMs with separate transition and emission probabilities?