Nick: It seems like a bad idea to me to call a prediction underconfident or overconfident depending on the particular outcome. Shouldn’t it depend instead on the “correct” distribution of outcomes, i.e. the Bayesian posterior taking all your information into account? I mean, by your definition, if we do the coin flip again with 99% heads and 1% tails, and our prediction is 99% heads and 1% tails, then if it comes up heads we’re slightly underconfident, and if it comes up tails we’re strongly overconfident. Hence there’s no such thing as an actually well-calibrated prediction here (?).

If we take into account the existence of a correct Bayesian posterior, then it’s clear that the “expected calibration” is not 0 in general. For instance, if p is the “correct” probability of heads and q is your prediction, then the “expected calibration” would seem to be −p log(q) − (1−p) log(1−q) + q log(q) + (1−q) log(1−q), i.e. the cross-entropy H(p, q) minus the entropy H(q).

And this can vanish for many different predictions: if you know for a fact that a certain experiment can go one of 3 ways, and over a long period of time the proportions have been 60%–30%–10%, then not only 33.3%–33.3%–33.3%, but also 45%–45%–10% and 57%–19%–24% have “expected calibration” ≈ 0 by this definition.
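(A quick numerical sketch of the claim above, assuming the “expected calibration” definition H(p, q) − H(q); the function name `expected_calibration` is just for illustration. It checks that all three of the listed 3-way predictions score ≈ 0 against the 60%–30%–10% true distribution.)

```python
import math

def expected_calibration(p, q):
    # Cross-entropy of the prediction q under the "correct" distribution p,
    # minus the log-score q expects to assign itself: H(p, q) - H(q).
    cross_entropy = -sum(pi * math.log(qi) for pi, qi in zip(p, q))
    entropy = -sum(qi * math.log(qi) for qi in q)
    return cross_entropy - entropy

p = [0.60, 0.30, 0.10]  # long-run proportions ("correct" distribution)
for q in ([1/3, 1/3, 1/3], [0.45, 0.45, 0.10], [0.57, 0.19, 0.24]):
    print(q, round(expected_calibration(p, q), 5))
```

All three come out at (or extremely close to) zero, even though only the first-listed distribution matches p’s entropy structure trivially, which is the point: this notion of calibration cannot distinguish them.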