Sleeping Beauty illustrates the consequences of following general epistemic principles. Merely finding an assignment of probabilities that’s optimal for a given way of measuring outcomes is an appeal to consequences; on its own it doesn’t work as a general way of managing knowledge (though some general ways of managing knowledge might happen to assign probabilities whose consequences are optimal in a given example). In principle, consequentialism makes any particular element of agent design superfluous, including those pertaining to knowledge. But that observation doesn’t help with designing specific ways of working with knowledge.
My argument is that the log scoring rule is not just a “given way of measuring outcomes”. A belief that maximizes E(log(p)) is, by definition, a proper Bayesian belief. There’s no appeal to consequences here other than “SB’s beliefs are well calibrated”.
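For concreteness (notation mine, not from the comment above): for a binary event with true probability q and reported probability p, the expected log score is

$$\mathbb{E}[\text{log score}] = q\log p + (1-q)\log(1-p),$$

and setting the derivative $\frac{q}{p} - \frac{1-q}{1-p}$ to zero gives $p = q$. The expectation is uniquely maximized by reporting the true probability, which is the sense in which the log rule is “proper”.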
Isn’t this kind of circular? The justification for the logarithmic scoring rule is that it gets agents to report their true beliefs, in contexts where such beliefs clearly make sense (no anthropic weirdness, in particular) and where agents have utilities linear in money. Extending this as a definition to situations where such beliefs don’t make sense seems arbitrary.
Do you have some source for saying the log scoring rule should only be used when no anthropics are involved? Without that, what does it even mean to have a well-calibrated belief?
(BTW, there are other nice features of using the log scoring rule, such as rewarding models that minimize their cross-entropy with the territory.)
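To spell out that cross-entropy point (a standard identity, not specific to this thread): for a true distribution q and a model p,

$$-\mathbb{E}_{x\sim q}[\log p(x)] = H(q) + D_{\mathrm{KL}}(q\,\|\,p),$$

so maximizing expected log score is the same as minimizing cross-entropy with the territory, and the shortfall from the best achievable score is exactly the KL divergence from q to p.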
I mean, there’s nothing wrong with using the log scoring rule. But since the implied probabilities will change depending on how you aggregate the utilities, it doesn’t seem to me that it gets us any closer to a truly objective, consequence-free answer: ‘objective probability’ is still meaningless here; it all depends on the bet structure.
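Here’s a minimal numerical sketch of that dependence on bet structure, assuming the standard Sleeping Beauty setup (the function name and grid search are mine, purely illustrative): scoring every awakening separately makes the score-maximizing credence in heads come out near 1/3, while scoring each experiment once makes it come out near 1/2.

```python
import numpy as np

def expected_log_score(p_heads, per_awakening=True):
    """Expected log score for reporting credence p_heads in Sleeping Beauty.

    Heads (prob 1/2): one awakening, contributing log(p_heads).
    Tails (prob 1/2): two awakenings, each contributing log(1 - p_heads).
    With per_awakening=True the tails branch is scored twice; otherwise once.
    """
    tails_weight = 2 if per_awakening else 1
    return 0.5 * np.log(p_heads) + 0.5 * tails_weight * np.log(1 - p_heads)

grid = np.linspace(0.001, 0.999, 9999)

# Credence maximizing the per-awakening score: ~1/3 (the "thirder" answer)
print(grid[np.argmax(expected_log_score(grid, per_awakening=True))])

# Credence maximizing the once-per-experiment score: ~1/2 (the "halfer" answer)
print(grid[np.argmax(expected_log_score(grid, per_awakening=False))])
```

Nothing in the rule itself privileges one aggregation over the other; the implied ‘probability’ tracks whichever scoring convention you pick, which is exactly the point about bet structure.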