The strongest argument against anthropic probabilities in decision-making comes from problems like the Absent-Minded Driver, in which the probabilities depend upon your decisions.
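For concreteness, here's a minimal sketch of that dependence, using the payoffs standardly attached to the Absent-Minded Driver (exit at the first intersection: 0, exit at the second: 4, drive past both: 1); the numbers and the grid search below are just the conventional illustration, not anything beyond that.

```python
# Absent-Minded Driver sketch: the driver can't tell the two intersections apart,
# so the only strategy available is "continue with probability p at any intersection".
# Standard payoffs: exit at X = 0, exit at Y = 4, continue past both = 1.

def planning_value(p):
    """Expected payoff of strategy p, evaluated before setting out."""
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

def prob_at_first_intersection(p):
    """Self-locating probability of being at X rather than Y for a driver following p:
    X is always reached, Y only with probability p, so P(X | at an intersection) = 1 / (1 + p)."""
    return 1 / (1 + p)

best_p = max((i / 1000 for i in range(1001)), key=planning_value)
print(f"optimal continue-probability: {best_p:.3f}")                              # ~ 2/3
print(f"P(at X) under that strategy:  {prob_at_first_intersection(best_p):.3f}")  # ~ 0.6
print(f"P(at X) if p were 0:          {prob_at_first_intersection(0):.3f}")       # = 1.0
```

The last two printed lines are the point: the self-locating probability isn't a fixed background fact to condition on before choosing, because choosing moves it.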
If anthropic probabilities don’t form part of a general-purpose decision theory, and you can get the right answers by simply taking the UDT approach and going straight to optimising outcomes given the strategies you could have, what use are the probabilities?
I won’t go so far as to say they’re meaningless, but without a general theory of when and how they should be used, I definitely think the idea is suspect.
Probabilities have a foundation independent of decision theory, as encoding beliefs about events. They’re what you really do expect to see when you look outside.
This is an important point about the Absent-Minded Driver problem et al., one that gets lost if you get too comfortable with the effectiveness of UDT. The agent’s probabilities are still accurate, and still correspond to the frequency with which they see things (truly!) - but they’re no longer related to decision-making in quite the same way.
“The use” is then to predict, as accurately as ever, what you’ll see when you look outside yourself.
And yes, probabilities can sometimes depend on decisions, not only in some anthropic problems but more generally in Newcomb-like ones. Yes, the idea of having a single unqualified belief, before making a decision, doesn’t make much sense in these cases. But Sleeping Beauty is not one of these cases.
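As a quick sanity check on that last claim, here's a small simulation sketch of the standard protocol (fair coin, one awakening on Heads, two on Tails; the sample size and seed are arbitrary): the fraction of awakenings that happen in a Heads world sits near 1/3 no matter what Beauty decides to do on waking, which is the sense in which her probability is fixed independently of her decision.

```python
import random

def heads_awakening_fraction(trials=100_000, seed=0):
    """Standard Sleeping Beauty protocol: flip a fair coin; one awakening on Heads, two on Tails.
    Returns the fraction of all awakenings at which the coin landed Heads."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

print(heads_awakening_fraction())  # ~ 0.333; nothing Beauty chooses on waking changes this frequency
```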
That’s a reasonable point, although I still have two major criticisms of it.
What is your resolution to the confusion about how anthropic reasoning should be applied, and to the various potential absurdities that seem to come from it? Non-anthropic probabilities do not have this problem, but anthropic probabilities definitely do.
How can anthropic probability be the “right way” to solve the Sleeping Beauty problem if it lacks the universality of methods like UDT?
1 - I don’t have a general solution; there are plenty of things I’m confused about, and certain cases where anthropic probability depends on your action are at the top of the list. There is a sense in which a certain extension of UDT can handle these cases if you “pre-chew” indexical utility functions into world-state utility functions for it (like a more sophisticated version of what’s described in this post, actually), but I’m not convinced that this is the last word.
Absurdity and confusion have a long (if slightly spotty) track record of indicating a lack in our understanding, rather than a lack of anything to understand.
2 - The same way that CDT gets the right answer on how much to pay for a 50% chance of winning $1, even though CDT isn’t correct in general. The Sleeping Beauty problem is literally so simple that it’s within the zone of validity of CDT.
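For a worked version of that zone-of-validity claim, take an illustrative bet (my numbers, nothing canonical): at every awakening Beauty is offered a ticket that pays $x if the coin was Heads and costs $1 if it was Tails. Naive CDT with thirder odds at a single awakening and the strategy-level (UDT-style) calculation break even at the same price, so the simpler machinery gives the right decision here.

```python
def per_awakening_ev(x):
    """CDT at one awakening with thirder credences: P(Heads | awake) = 1/3, P(Tails | awake) = 2/3.
    The ticket in front of Beauty pays +x on Heads and -1 on Tails."""
    return (1 / 3) * x + (2 / 3) * (-1)

def strategy_ev(x):
    """Value of the policy 'buy at every awakening', evaluated per run of the experiment:
    Heads (prob 1/2) means one awakening and one payout of x; Tails (prob 1/2) means losing 1 twice."""
    return 0.5 * x + 0.5 * (-2)

for x in (1.5, 2.0, 2.5):
    print(f"x = {x}: per-awakening EV = {per_awakening_ev(x):+.3f}, strategy EV = {strategy_ev(x):+.3f}")
# The magnitudes differ (one is per awakening, the other per experiment), but the sign always agrees:
# both say reject below x = 2, break even at x = 2, and accept above it.
```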
On 1), I agree that “pre-chewing” anthropic utility functions appears to be something of a hack. My current intuition in that regard is to reject the notion of anthropic utility (although not anthropic probability), but a solid formulation of anthropics could easily convince me otherwise.
On 2), if it’s within the zone of validity then I guess that’s sufficient to call something “a correct way” of solving the problem; but if there is an equally simple or simpler approach with a strictly broader domain of validity, I don’t think you can be justified in calling it “the right way”.