Thanks for your response. I should have been clearer in my terminology. By “Iterated Sleeping Beauty” (ISB) I meant to name the variant that we here have been discussing for some time, which repeats the Standard Sleeping Beauty problem some number of times, say 1000. In 1000 coin tosses over 1000 weeks, the expected number of Heads awakenings is 500 and the expected number of Tails awakenings is 1000. I have no catchy name for the variant I proposed, but I can make up an ugly one if nothing better comes to mind: call it Iterated Condensed Sleeping Beauty (ICSB). But I’ll assume you meant this particular variant of mine when you mention ISB.
You say
Q3. ISB is different from SSB as follows: more than one coin toss; same number of interviews regardless of result of coin toss
“More than one coin toss” is the iterated part. As far as I can see, and as I’ve argued a couple of times now, there’s no essential difference between SSB and ISB, so I meant to draw a comparison between my variant and ISB.
“Same number of interviews regardless of result of coin toss” isn’t correct. Sorry if I was unclear in my description. Beauty is interviewed once per toss when Heads, twice when Tails. This is the same in ICSB as in Standard and Iterated Sleeping Beauty. Is there an important difference between Standard Sleeping Beauty and Iterated Sleeping Beauty, or is there an important difference between Iterated Sleeping Beauty and Iterated Condensed Sleeping Beauty?
Q4. It makes a big difference. She has different information to condition
on. On a given coin flip, the probability of heads is 1⁄2. But, if it is
tails we skip a day before flipping again. Once she has been woken up a
large number of times, Beauty can easily calculate how likely it is that
heads was the most recent result of a coin flip.
We not only skip a day before tossing again, we interview on that day too! I see how over time Beauty gains evidence corroborating the fairness of the coin (that’s exactly my later rhetorical question), but assuming the coin is fair, and barring Type I errors, she’ll never see evidence that changes her initial credence in that proposition. In view of this, can you explain how she can use this information to predict, with better than her initial accuracy, whether Heads was the most recent outcome of the toss? I don’t see how.
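To put a number on my claim, here is a quick simulation sketch of the iterated process (the toss count is illustrative), showing the long-run fraction of awakenings at which the most recent toss came up Heads:

```python
import random

def heads_frequency_at_awakenings(n_tosses, seed=0):
    """Simulate ISB: one awakening after a Heads toss, two after Tails.
    Return the fraction of awakenings at which the most recent toss
    was Heads."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_tosses):
        if rng.random() < 0.5:      # fair coin: Heads
            heads_awakenings += 1
            total_awakenings += 1   # one interview
        else:                       # Tails: Day 1 and Day 2 interviews
            total_awakenings += 2
    return heads_awakenings / total_awakenings

print(heads_frequency_at_awakenings(1_000_000))  # close to 1/3
```

The frequency settles near 1⁄3 no matter how long she watches, which is exactly why I don’t see how accumulated experience could let her do better than that.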
In SSB, Tuesday&heads doesn’t exist, for example.
After relabeling Monday and Tuesday as Day 1 and Day 2 following the coin toss, Tuesday&Heads (H2) exists in none of these variants. So what difference is there?
Q1: I agree with you: 1⁄3, 1⁄3, 2⁄3
Well and good, but are these legitimate credences? If not, why not? And if so, why don’t they also apply to the following:
Standard Iterated Sleeping Beauty is isomorphic to the following Markov
chain, which just subdivides the Tails state in my condensed variant into
Day 1 and Day 2:
[1/2, 1/2, 0]
[0, 0, 1]
[1/2, 1/2, 0]
operating on row vector of states [ Heads&Day1 Tails&Day1 Tails&Day2 ],
abbreviated to [ H1 T1 T2 ]
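To make the computation concrete, here is a short numerical check (a sketch using power iteration) that this chain’s stationary distribution is uniform over [ H1 T1 T2 ], which is where the 1⁄3 answers come from:

```python
# Transition matrix over [H1, T1, T2], acting on a row vector of
# state probabilities (rows as given above).
P = [
    [0.5, 0.5, 0.0],  # from Heads&Day1: toss again
    [0.0, 0.0, 1.0],  # from Tails&Day1: Tails&Day2 follows for certain
    [0.5, 0.5, 0.0],  # from Tails&Day2: toss again
]

def step(pi):
    """One step of the chain: pi' = pi @ P, written out longhand."""
    return [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# Power iteration from an arbitrary start; the chain is irreducible and
# aperiodic (H1 has a self-loop), so this converges to the stationary
# distribution.
pi = [1.0, 0.0, 0.0]
for _ in range(1000):
    pi = step(pi)

print(pi)  # approximately [1/3, 1/3, 1/3]
```

So at a randomly sampled awakening, P(H1) = P(T1) = P(T2) = 1⁄3.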
When I say isomorphic, I mean that the distinct observable states of affairs are the same, and that the possible histories of transitions from one awakening to the next are governed by the same transition probabilities.
So either there’s a reason why my two-state Markov chain correctly models my condensed variant, allowing you to accept the 1⁄3 answers it computes, that doesn’t apply to the three-state Markov chain and its 1⁄3 answers (perhaps you came to those answers independently of my model); or else there’s some reason why the three-state Markov chain doesn’t correctly model the Iterated Sleeping Beauty process. Can you help me see where the difficulty lies?
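For completeness, the same check on the two-state side. I should flag that the transition numbers below are a reconstruction: they come from lumping T1 and T2 of the three-state chain under its stationary distribution (from a Tails awakening it is equally likely to be Day 1 or Day 2, hence the 1⁄4 chance of reaching Heads next), rather than being quoted from my earlier description:

```python
# Two-state chain over [Heads, Tails], with the Tails row obtained by
# lumping T1 and T2 under the stationary distribution (my reconstruction).
P2 = [
    [0.5, 0.5],    # from a Heads awakening: toss again
    [0.25, 0.75],  # from a Tails awakening: 1/2 it's Day 1 (Day 2, still
                   # Tails, follows); else toss again (1/4 Heads, 1/4 Tails)
]

# Power iteration to the stationary distribution.
pi2 = [1.0, 0.0]
for _ in range(1000):
    pi2 = [sum(pi2[i] * P2[i][j] for i in range(2)) for j in range(2)]

print(pi2)  # approximately [1/3, 2/3]
```

The stationary distribution [1⁄3, 2⁄3] agrees with lumping the uniform distribution of the three-state chain, which is the isomorphism claim in miniature.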
I’m struggling to see how ISB isn’t different from SSB in meaningful ways.
I assume you are referring to my variant, not to what I’m calling Iterated Sleeping Beauty. If so, I’m kind of baffled by this statement, because under similarities you just listed:
1. fair coin
2. woken twice if Tails, once if Heads
3. epistemic state reset each day
With the emendation that 2) is per coin toss, and that in 3) “each day” means “each awakening”, you have just listed three essential features that SSB, ISB, and ICSB all have in common. It’s exactly those three things that define the SSB problem. I’m claiming there aren’t any others. If you disagree, then please tell me what they are. Or if parts of my argument remain unclear, I can try to go into more detail.