SSA: in every universe, the average score across its observers is as good as it can be.
SIA: for every observer, the score is as good as it can be.
After rereading, it occurred to me that the difference could be illustrated by the Sleeping Beauty example, where she earns money over many runs of the experiment if she correctly predicts the outcome. In the SIA setup, she gets money every time she guesses correctly, so she is paid on both Monday and Tuesday, and over many runs she collects more and more. In the SSA setup, she earns money only if her prediction is correct for the whole experiment, not for each separate day, and on average she earns nothing.
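To make the difference concrete, here is a minimal Monte Carlo sketch (Python; the $1-per-correct-guess stakes and the function name are my own assumptions, not part of the original setup):

```python
import random

def sleeping_beauty_payoffs(n_experiments=100_000, guess="tails"):
    """Compare payoffs when Beauty is paid per correct awakening
    (the per-day, SIA-style bet) versus once per correct experiment
    (the whole-experiment, SSA-style bet)."""
    per_day_total = 0         # $1 for every awakening on which the guess is right
    per_experiment_total = 0  # $1 if the guess is right for the whole experiment
    for _ in range(n_experiments):
        coin = random.choice(["heads", "tails"])
        awakenings = 1 if coin == "heads" else 2  # heads: Monday; tails: Monday and Tuesday
        if guess == coin:
            per_day_total += awakenings
            per_experiment_total += 1
    print(f"always guessing {guess!r}: "
          f"per-day payoff = {per_day_total / n_experiments:.3f}, "
          f"per-experiment payoff = {per_experiment_total / n_experiments:.3f}")

sleeping_beauty_payoffs(guess="tails")  # ~1.00 per day-bet, ~0.50 per experiment-bet
sleeping_beauty_payoffs(guess="heads")  # ~0.50 per day-bet, ~0.50 per experiment-bet
```

Always guessing tails doubles the per-day payoff, because tails buys two paid awakenings, but gives no edge at all on the per-experiment bet, which is exactly the SIA/SSA split above.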
I have an idea for a non-fictional version of Sleeping Beauty. Do you think this would be a correct implementation:
A person is in a room with many people. He flips a coin; if it is heads, he asks one random person to guess whether it came up heads or tails. If it is tails, he asks two random people the same question. The other people can't observe how many people were asked.
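A quick simulation sketch of that room setup (Python; I'm assuming the two people asked on tails are distinct, analogous to the Monday and Tuesday awakenings):

```python
import random

def room_version(n_people=20, n_rounds=100_000):
    """Room version of Sleeping Beauty: on heads ask one random person,
    on tails ask two. From the point of view of someone who is asked,
    how often was the coin tails?"""
    times_asked = 0
    times_asked_on_tails = 0
    for _ in range(n_rounds):
        coin = random.choice(["heads", "tails"])
        k = 1 if coin == "heads" else 2
        asked = random.sample(range(n_people), k)  # distinct people, like the two awakenings
        times_asked += len(asked)
        if coin == "tails":
            times_asked_on_tails += len(asked)
    print(f"P(tails | I was asked) ~= {times_asked_on_tails / times_asked:.3f}")

room_version()  # prints ~0.667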
> After rereading, it occurred to me that the difference could be illustrated by the Sleeping Beauty example [...]
Yep ^_^
See 3.1 in my old tech report: https://www.fhi.ox.ac.uk/wp-content/uploads/Anthropic_Decision_Theory_Tech_Report.pdf
> I have an idea for a non-fictional version of Sleeping Beauty [...]
See https://www.lesswrong.com/posts/YZzoWGCJsoRBBbmQg/solve-psy-kosh-s-non-anthropic-problem
You’re rediscovering some classics ^_^
That problem addresses some of the issues in anthropic reasoning—but not all.