“Taken together, the calibration results of the previous post and the complete class theorem suggest (to me, anyway) that irrespective of one’s philosophical views on frequentism versus Bayesianism, perfect calibration is not possible in full generality for a rational decision-making agent.”
Huh? I feel like there’s a giant chunk missing just before this paragraph, which seems to have nothing to do with anything you said prior to it.