However, I think it is reasonable to at least add a calibration requirement: there should be no way to systematically correct estimates up or down as a function of the expected value.
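One way to make that requirement precise (my phrasing, not a formalism from the original discussion): writing $\hat{u}$ for the learner's estimate and $u$ for the realized value, calibration asks that, conditional on the estimate coming out at any particular level $v$, the realized value matches it on average:

$$\mathbb{E}\left[\,u \;\middle|\; \hat{u} = v\,\right] = v \quad \text{for all } v,$$

so there is no fixed correction, applied as a function of $v$, that would improve the estimates.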
Why is this important? If the thing with the highest score is always the best action to take, why does it matter whether that score is an overestimate? Utility functions are fictional anyway, right?
If there’s a systematic bias in the score, the thing with the highest score may not always be the best action to take. Calibrating the estimates may change the ranking of options.
For example, it could be that expected values above 0.99 are almost always significant overestimates, with an average true value of 0.5. A calibrated learner would observe this and systematically correct such items downwards. The new top choices would probably have values like 0.989 (if that’s the only correction applied).
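To make the ranking flip concrete, here is a minimal sketch. The option names and scores are made up for illustration; the only thing taken from the example above is the single "estimates above 0.99 really average about 0.5" correction.

```python
# Hypothetical options with estimated expected values (illustrative numbers only).
options = {
    "A": 0.995,  # extreme scores like these have historically averaged ~0.5 once observed
    "B": 0.992,
    "C": 0.989,
    "D": 0.970,
}

def calibrate(estimate: float) -> float:
    """Apply the one observed correction: estimates above 0.99 really average 0.5."""
    return 0.5 if estimate > 0.99 else estimate

best_raw = max(options, key=options.get)
best_calibrated = max(options, key=lambda name: calibrate(options[name]))

print(best_raw)         # "A": the overestimated option wins on raw scores
print(best_calibrated)  # "C": after the correction, the 0.989 option comes out on top
```

The point is just that a single aggregate-level correction, with no change to any individual object-level belief, is enough to change which action looks best.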
This provides something of a guarantee that systematic Goodhart-type problems will eventually be recognized and corrected, to the extent that they occur.
A meta-rule like that, which corrects observed biases in the aggregate scores, isn’t easy to represent as a direct object-level hypothesis about the data. That’s why calibrated learning may not be Bayesian. And, without a calibration guarantee, you’d need some other argument as to why representing uncertainty helps to avoid Goodhart.