Do you have a source for the claim that the log scoring rule should only be used when no anthropics are involved? Without one, what does it even mean to have a well-calibrated belief?
(BTW, the log scoring rule has other nice features too, such as rewarding models that minimize their cross-entropy with the territory.)
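To spell that out (this is the standard identity, not something specific to this thread): if the territory generates outcomes with distribution $q$ and you report $p$, your expected log score is

$$\mathbb{E}_{x \sim q}[\log p(x)] = -H(q, p) = -H(q) - D_{\mathrm{KL}}(q \,\|\, p),$$

so maximizing expected log score is exactly minimizing the cross-entropy $H(q, p)$, and the unique optimum is $p = q$, since $D_{\mathrm{KL}} \ge 0$ with equality iff $p = q$.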
I mean, there's nothing wrong with using the log scoring rule. But since the implied probabilities change depending on how you aggregate the utilities, it doesn't seem to me that it gets us any closer to a truly objective, consequence-free answer. 'Objective probability' is still meaningless here; it all depends on the bet structure.
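Here's a minimal sketch of what I mean, using the standard Sleeping Beauty setup as the example (the setup and numbers are my illustration, not anything you claimed): the very same log scoring rule implies different "well-calibrated" credences depending on whether scores are summed per awakening or counted once per experiment.

```python
import numpy as np

# Illustrative Sleeping Beauty setup (my example): a fair coin is tossed;
# heads -> Beauty is awakened once, tails -> twice. At each awakening she
# reports a credence p for heads and receives the log score:
# log(p) if heads, log(1 - p) if tails.

ps = np.linspace(0.001, 0.999, 999)

# Aggregation A: sum the score over every awakening.
# Expected score = 0.5 * log(p) + 0.5 * 2 * log(1 - p).
per_awakening = 0.5 * np.log(ps) + 1.0 * np.log(1 - ps)

# Aggregation B: count the score once per experiment.
# Expected score = 0.5 * log(p) + 0.5 * log(1 - p).
per_experiment = 0.5 * np.log(ps) + 0.5 * np.log(1 - ps)

print(ps[np.argmax(per_awakening)])   # ~0.333 (the "thirder" credence)
print(ps[np.argmax(per_experiment)])  # ~0.5   (the "halfer" credence)
```

Same scoring rule, same coin, different aggregation, different optimal credence, which is why I don't think the log score by itself buys you objectivity.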