OK, fair enough—I didn’t specify how she acquired that knowledge, and I wasn’t assuming a clever method. I was just considering a variant of the story (often discussed in the literature) where Beauty is always truthfully told the day of the week after choosing her betting odds, to see if she then adjusts her betting odds. (And to be explicit, in the trillion Beauty story, she’s always told truthfully whether she’s the first awakening or not, again to see if she changes her odds). Is that clearer?
Yes, I wasn’t aware that “truthfully tell on all days” was a standard assumption for how she receives that information; thank you for the clarification.
It’s OK.
The usual way this applies is in the standard problem, where the coin is known to be unbiased. Typically, a person arguing for the 2/3 case says that Beauty should shift to 1/2 on learning it is Monday, whereas a critic originally arguing for the 1/2 case says that Beauty should shift to 1/3 for Tails (2/3 for Heads) on learning it is Monday.
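For concreteness, here is a minimal numeric sketch of those two shifts (my own illustration, not part of the original exchange), assuming each camp conditionalizes on “it is Monday” with the natural likelihoods P(Monday | Heads) = 1 and P(Monday | Tails) = 1/2:

    from fractions import Fraction

    def heads_given_monday(prior_heads, p_mon_given_heads, p_mon_given_tails):
        # Bayes' rule for P(Heads | Monday).
        prior_tails = 1 - prior_heads
        numerator = prior_heads * p_mon_given_heads
        return numerator / (numerator + prior_tails * p_mon_given_tails)

    # Thirder starting point: P(Heads) = 1/3; a Heads-awakening is always on
    # Monday, while a Tails-awakening is on Monday only half the time.
    print(heads_given_monday(Fraction(1, 3), Fraction(1), Fraction(1, 2)))  # 1/2

    # Lewis-style halfer starting point: P(Heads) = 1/2, same likelihoods.
    print(heads_given_monday(Fraction(1, 2), Fraction(1), Fraction(1, 2)))  # 2/3

So the 2/3-for-Tails camp lands on even odds after the update, while the 1/2 camp lands on 2/3 for Heads, which are exactly the two shifts described above.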
The difficulty is that both of those answers imply something very presumptuous in the trillion Beauty limit (near certainty of Tails before the shift, or near certainty of Heads after the shift).
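A sketch of why both look presumptuous in that limit (again my own arithmetic, on the assumption that Tails produces N awakenings, each equally likely to be the first one):

    from fractions import Fraction

    N = 10**12  # awakenings if the coin lands Tails ("trillion Beauty")

    # Thirder-style credence on waking: all N + 1 possible awakenings are
    # weighted equally, so Beauty is already nearly certain of Tails.
    p_tails_before = Fraction(N, N + 1)

    # Lewis-style halfer who keeps P(Heads) = 1/2 on waking and then
    # conditionalizes on "this is the first awakening" (certain under Heads,
    # probability 1/N under Tails).
    p_heads, p_tails = Fraction(1, 2), Fraction(1, 2)
    p_first_h, p_first_t = Fraction(1), Fraction(1, N)
    p_heads_after = (p_heads * p_first_h) / (p_heads * p_first_h + p_tails * p_first_t)

    print(float(p_tails_before))  # ~1.0: near certainty of Tails before the shift
    print(float(p_heads_after))   # ~1.0: near certainty of Heads after the shift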
Nick Bostrom has argued for a “hybrid” solution which avoids the shift, though on the face of it this looks inconsistent with Bayesian updating. The idea is that Beauty might be in a different “reference class” before and after learning the day (a rough side-by-side of the three positions is sketched below the links).
See http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0011/5132/sleeping_beauty.pdf or http://www.nickbostrom.com/ (Right hand column, about halfway down the page).
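For comparison, a rough side-by-side of the three positions in the standard problem, with the hybrid rendered simply as “no shift” (my own paraphrase of the proposal, not Bostrom’s formalism):

    # P(Heads) before learning the day, and after learning it is Monday.
    positions = {
        "Thirder":        (1 / 3, 1 / 2),  # shifts up on "Monday"
        "Lewis halfer":   (1 / 2, 2 / 3),  # also shifts up on "Monday"
        "Bostrom hybrid": (1 / 2, 1 / 2),  # no shift at all
    }
    for name, (before, after) in positions.items():
        print(f"{name:15} before={before:.3f}  after={after:.3f}")

The hybrid row is the only one whose “after” value cannot be reached from its “before” value by ordinary conditionalization on “it is Monday”, which is why it looks non-Bayesian unless the reference class is allowed to change between the two stages.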