In the Sleeping Beauty problem, SIA and SSA disagree on the probability that it’s Monday or Tuesday. But if we have to bet, the optimal bet depends on what Ms Beauty is maximizing: the number of individual bet-instances that are correct, or whether the bet is correct per experiment, with the two bets on different days counting as a single bet. Once the betting rules are clarified, there is only one optimal way to bet, regardless of whether you believe SIA or SSA.
Moreover, the optimal bets in one of those scenarios give “implied beliefs” that follow SIA, and in the other they give “implied beliefs” that follow SSA. This suggests that we should taboo the notion of “beliefs” and instead talk only about optimal behavior. This is the “phenomenalist position” on Sleeping Beauty, if I understand correctly.
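To make the two scoring rules concrete, here is a minimal sketch, assuming the standard setup (fair coin, one awakening on Heads, two on Tails) and a constant even-stakes guess of “Tails” at every awakening; the function and its name are purely illustrative:

```python
import random

def simulate(trials=100_000):
    """Estimate how often a constant 'Tails' guess is right under two scoring rules.

    Assumes the standard setup: fair coin, one awakening on Heads, two on Tails.
    - per-awakening: every awakening's guess counts as a separate bet-instance
    - per-experiment: the repeated guesses count as a single bet
    """
    awakening_hits = awakening_total = 0
    experiment_hits = 0
    for _ in range(trials):
        tails = random.random() < 0.5       # fair coin
        awakenings = 2 if tails else 1      # Tails: Monday and Tuesday; Heads: Monday only
        awakening_total += awakenings
        if tails:
            awakening_hits += awakenings
            experiment_hits += 1
    return awakening_hits / awakening_total, experiment_hits / trials

per_awakening, per_experiment = simulate()
print(f"per-awakening:  {per_awakening:.3f}")   # ~0.667, the SIA ("thirder") number
print(f"per-experiment: {per_experiment:.3f}")  # ~0.500, the SSA ("halfer") number
```

The same policy scores differently under the two rules, which is why the “implied beliefs” differ even though the behavior is fixed.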
Question 1: Is this correct? Is this roughly the conclusion all those LW discussions a couple years ago came to?
Question 2: Does this completely resolve the issue, or must we still decide between SIA and SSA? Are there scenarios where optimal behavior depends on whether we believe SIA or SSA even after the exact betting rules have been specified?
I think the consensus was not so much that phrasing anthropic problems in terms of decision problems is necessary, or that a “dissolution” is taking place, as that it simply works, which is a very important property to have.
One has to be careful when identifying implied beliefs as SSA or SIA, because the comparison is usually made by plugging SSA and SIA probabilities into a naive causal decision theory that assumes ‘the’ bet is what counts (or by reverse-engineering such a decision theory). Outside that domain, the labels start to lose their usefulness.
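As a rough illustration of that reverse-engineering step, here is a hypothetical helper; it bakes in the naive-CDT assumption that the break-even price of a ticket paying 1 if Heads just is the implied credence in Heads:

```python
def implied_p_heads(placements_if_heads, placements_if_tails, p_coin_heads=0.5):
    """Break-even price of a ticket paying 1 if Heads, bought at every bet placement.

    Under a naive CDT reading, that break-even price is the "implied belief" in Heads.
    Solves: p_H * (1 - price) * placements_if_heads
          = (1 - p_H) * price * placements_if_tails
    """
    h, t = placements_if_heads, placements_if_tails
    return p_coin_heads * h / (p_coin_heads * h + (1 - p_coin_heads) * t)

print(implied_p_heads(1, 1))  # 0.5    -> per-experiment scoring, SSA-style belief
print(implied_p_heads(1, 2))  # ~0.333 -> per-awakening scoring, SIA-style belief
```

Bets with other payoff structures, or decision theories that don’t factor through a single ‘the’ bet, won’t fit this mapping, which is where the labels stop being informative.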
In the course of answering Stuart Armstrong, I put up two posts on this general subject, except that in both cases the main bodies of the posts were incomplete and there’s important content in comments I made replying to my own posts. Which is to say, they’re absolutely not reader-friendly, sorry. But if you do work out their content, I think you should find the probabilities in the case of Sleeping Beauty somewhat less mysterious. First post, on how we assign probabilities given causal information. Second post, on what this looks like when applied.