I think what he means by “calibrated” is something like this: it shouldn’t be possible for someone else to systematically improve the probabilities you assign to the possible answers to a question just from knowing what values you’ve assigned (and your biases), without looking at the question itself.
I suppose the improvement would indeed be measured as the relative entropy (KL divergence) of the “correct” guess with respect to the guess given.
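As a minimal sketch of what that would look like in practice (the forecast data, the binary-question setup, and the grouping by stated probability are all assumptions made purely for illustration): group a forecaster’s answers by the probability they stated, take the empirical frequency in each group as the “correct” guess, and compute the relative entropy of that empirical distribution with respect to the stated one. For a calibrated forecaster it should be roughly zero, i.e. no systematic improvement is available from the stated values alone.

```python
import math
from collections import defaultdict

# Hypothetical forecasts: (stated probability of "yes", actual outcome 0/1).
# The data here is made up purely for illustration.
forecasts = [
    (0.7, 1), (0.7, 1), (0.7, 0), (0.7, 1), (0.7, 0),
    (0.9, 1), (0.9, 1), (0.9, 1), (0.9, 0), (0.9, 1),
]

def kl_bernoulli(p_true, p_stated, eps=1e-9):
    """Relative entropy D(p_true || p_stated) for a binary outcome."""
    p_stated = min(max(p_stated, eps), 1 - eps)
    total = 0.0
    if p_true > 0:
        total += p_true * math.log(p_true / p_stated)
    if p_true < 1:
        total += (1 - p_true) * math.log((1 - p_true) / (1 - p_stated))
    return total

# Group forecasts by the stated probability; the empirical frequency in each
# group plays the role of the "correct" guess.
groups = defaultdict(list)
for p, outcome in forecasts:
    groups[p].append(outcome)

for p, outcomes in sorted(groups.items()):
    freq = sum(outcomes) / len(outcomes)
    print(f"stated {p:.2f}: empirical {freq:.2f}, "
          f"D(empirical || stated) = {kl_bernoulli(freq, p):.4f}")
```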