Forecasters vary on at least three dimensions:
accuracy—as measured by, e.g., your average Brier score over time (the Brier score is a measure of error: if you think (say) p is 0.7 likely and p turns out to be true, then your Brier score on that forecast is (1 − 0.7)^2 = 0.09).
calibration—how close are they to perfect calibration, where, for any x, if they assign a probability of x% to a given statement, they are right in x% of cases?
reliability—how much evidence does a given forecast of yours provide for the proposition in question being true? I think of this as “for a given confidence level c, what’s the Bayes factor P(you say the probability of x is c | x) / P(you say the probability of x is c | not-x)?” (A toy code sketch of all three quantities follows this list.)
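To make the three definitions concrete, here is a minimal Python sketch (my own toy code, not taken from any library or from the forecasting literature); the function names and the representation of a forecast history as (stated probability, 0/1 outcome) pairs are just illustrative choices:

```python
def brier_score(forecasts):
    """Accuracy: average of (p - outcome)^2 over (probability, 0/1 outcome) pairs."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

def calibration_table(forecasts):
    """Calibration: for each stated probability p, the fraction of those forecasts
    that came true. Perfect calibration means the fraction equals p for every p."""
    counts = {}
    for p, o in forecasts:
        hits, total = counts.get(p, (0, 0))
        counts[p] = (hits + o, total + 1)
    return {p: hits / total for p, (hits, total) in counts.items()}

def bayes_factor(forecasts, c):
    """Reliability at confidence c: the empirical likelihood ratio
    P(you say c | x) / P(you say c | not-x)."""
    trues = [p for p, o in forecasts if o == 1]
    falses = [p for p, o in forecasts if o == 0]
    return (sum(1 for p in trues if p == c) / len(trues)) / \
           (sum(1 for p in falses if p == c) / len(falses))
```

(The last function will obviously blow up on a history where you were never wrong at level c; it is only meant to pin down the definitions.)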
I wonder how these three properties relate to each other.
(A) Assume that you are perfectly calibrated at 90% and you say “It will rain today with 90% probability”—how should I update on your claim given I know your perfect calibration? My first intuition is that, given your perfect calibration,
P(you say rain with 90% | rain) is 90% and P(you say rain with 90% | no rain) is 10%. But that doesn’t follow from the fact that you are perfectly calibrated, does it? Does your calibration have any bearing at all on your reliability (apart from the fact that both positively correlate with forecasting competence)? If it doesn’t, why do we care about being calibrated?
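One way to see that it doesn’t follow from calibration alone is a toy numerical example (all of the numbers below, the announcement frequencies and the base rate they induce, are my own illustrative assumptions):

```python
# A perfectly calibrated forecaster who says "90% rain" on 80% of days
# and "10% rain" on the remaining 20% of days.
p_say_90 = 0.8
p_say_10 = 0.2
p_rain_given_say_90 = 0.9   # perfect calibration at 90%
p_rain_given_say_10 = 0.1   # perfect calibration at 10%

# Base rate of rain induced by this behaviour:
p_rain = p_say_90 * p_rain_given_say_90 + p_say_10 * p_rain_given_say_10   # 0.74

# Invert with Bayes' rule:
p_say_90_given_rain    = p_say_90 * p_rain_given_say_90 / p_rain                 # ~0.97, not 0.90
p_say_90_given_no_rain = p_say_90 * (1 - p_rain_given_say_90) / (1 - p_rain)     # ~0.31, not 0.10

bayes_factor = p_say_90_given_rain / p_say_90_given_no_rain                      # ~3.2, not 9
```

On these numbers, your calibration does fix the posterior odds I should hold after hearing “90%” (9:1), but the Bayes factor your announcement carries also depends on the base rate of rain: here it equals (0.9/0.1) × (0.26/0.74) ≈ 3.2, and it only coincides with 9 when the base rate is 50%. So calibration alone does not determine reliability.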
(B) How does accuracy relate to reliability? Can I infer something about your reliability from knowing your Brier score over time?
It would seem that, by definition, perfectly calibrated forecasters are equally reliable, and that among the perfectly calibrated forecasters the more accurate ones are those who forecast more extreme probabilities more often.
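As a sanity check on the second half of that claim, here is a toy calculation (the two forecasters and the 50% base rate are my own illustrative assumptions). For a perfectly calibrated forecaster, the expected Brier penalty of a single forecast at confidence c is c·(c − 1)^2 + (1 − c)·c^2 = c·(1 − c), which shrinks as c moves toward 0 or 1:

```python
def expected_brier(announcements):
    """Expected Brier score of a perfectly calibrated forecaster, given a dict
    mapping each stated probability c to how often it is issued.
    Each forecast at confidence c contributes c*(1-c) in expectation."""
    return sum(freq * c * (1 - c) for c, freq in announcements.items())

# Forecaster A: always says 50%.
print(expected_brier({0.5: 1.0}))            # -> 0.25

# Forecaster B: says 90% half the time and 10% the other half. Still perfectly
# calibrated, and consistent with the same 50% base rate (0.5*0.9 + 0.5*0.1 = 0.5).
print(expected_brier({0.9: 0.5, 0.1: 0.5}))  # -> 0.09 (up to float rounding)
```

Both are perfectly calibrated, but B, who commits to extreme probabilities more often, ends up with the lower (better) expected Brier score.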