That’s a reasonable point, although I still have two major criticisms of it.
1) What is your resolution to the confusion about how anthropic reasoning should be applied, and to the various potential absurdities that seem to come from it? Non-anthropic probabilities do not have this problem, but anthropic probabilities definitely do.
2) How can anthropic probability be the “right way” to solve the Sleeping Beauty problem if it lacks the universality of methods like UDT?
1 - I don’t have a general solution; there are plenty of things I’m confused about, and cases where anthropic probability depends on your own action are at the top of the list. There is a sense in which a certain extension of UDT can handle these cases if you “pre-chew” indexical utility functions into world-state utility functions for it (like a more sophisticated version of what’s described in this post, actually; a toy sketch follows below), but I’m not convinced that this is the last word.
Absurdity and confusion have a long (if slightly spotty) track record of indicating a lack in our understanding, rather than a lack of anything to understand.
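To make the “pre-chew” move concrete, here is a toy sketch. The setup and names are mine, purely for illustration (the standard Sleeping Beauty experiment with $1 per correct guess at each awakening), not any canonical UDT formalism: the indexical payoff “I get $1 if my current guess is right” is summed over every awakening a world contains, and the resulting world-state utility is what a UDT-style agent maximizes over whole policies.

```python
# Toy sketch of "pre-chewing" an indexical utility into a world-state
# utility for Sleeping Beauty. Assumptions (mine, for illustration):
# fair coin, one awakening on Heads, two on Tails, $1 per correct guess.

def world_utility(coin, guess):
    """Sum the indexical payoff ("$1 if my current guess is right")
    over every awakening in the world -- this sum is the 'pre-chew'."""
    awakenings = 1 if coin == "H" else 2
    return awakenings * (1 if guess == coin else 0)

def policy_value(guess):
    """UDT-style evaluation: expected world utility of committing to
    one guess up front, with no indexical update on being awake."""
    return 0.5 * world_utility("H", guess) + 0.5 * world_utility("T", guess)

print(policy_value("H"))  # 0.5
print(policy_value("T"))  # 1.0 -> always guess Tails
```

Guessing Tails wins because the Tails world contains two paying awakenings, a fact the summed world-state utility captures directly while the raw indexical utility leaves it to the anthropic probabilities.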
2 - The same way that CDT gets the right answer on how much to pay for a 50% chance of winning $1 (0.5 × $1 = $0.50), even though CDT isn’t correct in general. The Sleeping Beauty problem is simple enough that it sits squarely within CDT’s zone of validity.
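To illustrate why it sits inside that zone: once you fix which events the bet pays out on (per experiment or per awakening), a plain frequency count settles the probability and expected value does the rest. A minimal simulation of the standard setup, assuming a fair coin with one awakening on Heads and two on Tails (the function names are mine):

```python
import random

def simulate(trials=100_000, seed=0):
    """Count Heads frequency two ways: per experiment and per awakening."""
    rng = random.Random(seed)
    heads_runs = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        total_awakenings += 1 if heads else 2
        if heads:
            heads_runs += 1
            heads_awakenings += 1  # a Heads world contains one awakening
    print(f"per experiment: {heads_runs / trials:.3f}")                 # ~0.500 (halfer)
    print(f"per awakening:  {heads_awakenings / total_awakenings:.3f}")  # ~0.333 (thirder)

simulate()
```

The two printouts are just the halfer and thirder numbers; the only live question is which frequency the bet references, not any calculation that would strain a simple decision theory.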
On 1), I agree that “pre-chewing” anthropic utility functions appears to be something of a hack. My current intuition in that regard is to reject the notion of anthropic utility (although not anthropic probability), but a solid formulation of anthropics could easily convince me otherwise.
On 2), if it’s within the zone of validity, then I guess that’s sufficient to call something “a correct way” of solving the problem; but if there is an equally simple or simpler approach with a strictly broader zone of validity, I don’t think you’re justified in calling the narrower one “the right way”.