I think the consensus was not so much that phrasing anthropic problems in terms of decision problems is necessary, or that there is a “dissolution” taking place, but merely that it works, which is a very important property to have.
One has to be careful when identifying implied beliefs as SSA or SIA, because the comparison is usually made by plugging SSA and SIA probabilities into a naive causal decision theory that assumes ‘the’ bet is the only one that counts (or by reverse-engineering such a decision theory). Outside that domain, the labels start to lose their usefulness.
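To make that concrete, here is a minimal sketch of the standard per-awakening bet in Sleeping Beauty (the function names and stakes are hypothetical, chosen just for illustration): the ‘thirder’ (SIA-flavored) credence of 2/3 in tails, plugged into a naive one-bet expected value, recommends exactly the same bets as the ‘halfer’ (SSA-flavored) credence of 1/2 once you account for the bet being resolved at both tails awakenings.

```python
# A minimal sketch of the point above; the setup and numbers are
# hypothetical. Per-awakening bet in Sleeping Beauty: win stake_tails
# if the coin was tails, lose stake_heads if it was heads. On tails
# there are two awakenings, and Beauty decides the same way at each.

def ev_naive(p_tails, stake_tails, stake_heads):
    """Naive causal decision theory: only 'this' bet counts, once."""
    return p_tails * stake_tails - (1 - p_tails) * stake_heads

def ev_counting_awakenings(p_tails, stake_tails, stake_heads):
    """Count the bet as resolved at every awakening: on tails it
    settles twice, so the tails payoff doubles."""
    return p_tails * 2 * stake_tails - (1 - p_tails) * stake_heads

SIA_CREDENCE = 2 / 3  # 'thirder' probability of tails on awakening
SSA_CREDENCE = 1 / 2  # 'halfer' probability of tails on awakening

# SIA credence in the naive one-bet calculation accepts iff
# 2 * stake_tails > stake_heads ...
print(ev_naive(SIA_CREDENCE, stake_tails=1, stake_heads=1.5))  # ~0.167: accept

# ... and SSA credence, once both awakenings are counted, accepts on
# exactly the same condition, so the two recommend identical bets.
print(ev_counting_awakenings(SSA_CREDENCE, stake_tails=1, stake_heads=1.5))  # 0.25: accept
```

Both calculations accept exactly when 2 × stake_tails > stake_heads, which is why reading an SSA or SIA belief off someone's betting behavior only makes sense relative to a fixed rule for how bets are counted.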
In the course of answering Stuart Armstrong I put up two posts on this general subject, though in both cases the main bodies of the posts were incomplete and there’s important content in comments I made replying to my own posts. Which is to say, they’re absolutely not reader-friendly, sorry. But if you do work out their content, I think you’ll find the probabilities in the case of Sleeping Beauty somewhat less mysterious. The first post is on how we assign probabilities given causal information; the second is on what this looks like when applied.