Then by the temporal consistency axiom, this is indeed what her future copies will do.
They have information she doesn’t.
Suppose there are one trillion person-days if she wakes up once, and one trillion and one if she wakes up twice. Specifically, her now, her on heads, and her on tails have three separate pieces of information.
The probability of being Sleeping Beauty on the day before the experiment is one in one trillion if the coin lands on heads, and one in one trillion and one if it lands on tails. This gives an odds ratio of 1.000000000001:1.
The probability of being Sleeping Beauty during the experiment is one in one trillion if the coin lands on heads, and two in one trillion and one if it lands on tails (since there are then two days it could be). This gives an odds ratio of 1.000000000001:2.
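To make the arithmetic concrete, here is a minimal sketch of the person-day calculation above (assuming, as stated, one trillion person-days on heads and one trillion and one on tails):

```python
from fractions import Fraction

# Person-day counts from the setup above.
heads_days = 10**12       # one trillion person-days if the coin lands on heads
tails_days = 10**12 + 1   # one trillion and one if it lands on tails

# Day before the experiment: exactly one qualifying person-day either way.
before = Fraction(1, heads_days) / Fraction(1, tails_days)
print(before)   # 1000000000001/1000000000000, i.e. odds of 1.000000000001:1

# During the experiment: one qualifying day on heads, two on tails.
during = Fraction(1, heads_days) / Fraction(2, tails_days)
print(during)   # 1000000000001/2000000000000, i.e. odds of 1.000000000001:2
```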
Since her future self has different information, it makes perfect sense for her to make a different choice.
There is some disagreement on whether or not probability works that way. (This is technically not an understatement. Some people agree with me.) Suppose it doesn’t.
Assuming Sleeping Beauty experiences exactly the same thing both days, she will get no additional relevant information.
If the experiences aren’t exactly identical, she’s twice as likely to have a given experience if she wakes up twice. For example, if she rolls a die each time she wakes up, there’s a one in six chance of rolling at least one six if she wakes up once, but an 11 in 36 chance if she wakes up twice.
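As a quick check on that die example, here is the same arithmetic as a sketch:

```python
from fractions import Fraction

# Probability of rolling at least one six: one waking versus two.
print(Fraction(1, 6))             # 1/6
print(1 - Fraction(5, 6) ** 2)    # 11/36
```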
Then let’s improve the axiom to get rid of that potential issue. Change it to something like:
“If an agent at two different times has the same preferences, then the past version will never give up anything of value in order to change the conditional decision of its future version. Here, conditional decision means the mapping from information to decision.”
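To make “conditional decision” concrete, here is a minimal sketch; the information states and decisions in it are purely hypothetical, chosen only to illustrate the mapping:

```python
# A conditional decision is a mapping from information to decision.
# These keys and values are hypothetical illustrations, not part of the argument above.
conditional_decision = {
    "day before the experiment": "take the bet",
    "awake during the experiment, no memory of an earlier waking": "take the bet",
}

def decide(information: str) -> str:
    """Return the decision committed to for a given information state."""
    return conditional_decision[information]

# The revised axiom says a past version with the same preferences will not
# pay to change this mapping, even though a future version may act on a
# different entry because it holds different information.
print(decide("awake during the experiment, no memory of an earlier waking"))
```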