What is calibration?
A forecaster is well-calibrated if, for every p∈[0,1], of the propositions that they assign probability approximately p to, the fraction of them that are true is approximately p. However, there is no natural probability distribution over propositions, so this notion is not well-defined.
Often, people aren’t even using an implicit probability distribution over propositions when they talk about calibration, and are instead referring to limiting densities over a particular sequence of propositions. For instance, a forecaster may be asked to predict every bit in a bitstream, and be judged well-calibrated if, for every p∈[0,1], the fraction of the first n propositions assigned probability approximately p that are true converges to approximately p as n goes to infinity.
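For concreteness, here is a minimal sketch (in Python, with an invented toy bitstream and an arbitrary choice of ten equal-width bins) of the kind of bookkeeping this limiting-frequency notion suggests: group predictions by stated probability and compare each group’s stated probability with the fraction that came true.

```python
import random

def calibration_table(forecasts, outcomes, bins=10):
    """Group (stated probability, truth value) pairs into equal-width bins and
    report, for each bin, the average stated probability and the fraction of
    propositions in that bin that turned out true."""
    grouped = {}
    for p, o in zip(forecasts, outcomes):
        key = min(int(p * bins), bins - 1)   # clamp p == 1.0 into the last bin
        grouped.setdefault(key, []).append((p, o))
    for key in sorted(grouped):
        pairs = grouped[key]
        avg_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(o for _, o in pairs) / len(pairs)
        print(f"stated ~{avg_p:.2f}: observed frequency {freq:.2f} over {len(pairs)} propositions")

# Toy stream: each bit has its own chance of being 1, and the forecaster
# happens to know that chance exactly, so the table should roughly line up.
rng = random.Random(0)
chances = [rng.random() for _ in range(50_000)]
bits = [rng.random() < p for p in chances]
calibration_table(chances, bits)
```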
Calibration is not just a relationship between probability assignments and the truth, but a relationship between probability assignments, the truth, and some model for what it means to say that some percentage of a set of propositions is true. This model could be a probability distribution over propositions, or an explicit sequence of them. The dependence on what you mean by percentage of propositions is fairly dramatic.
For any atomless probability measure, you can pick a sequence of propositions such that, in terms of limiting frequencies along the sequence, the probability distribution is guaranteed to be well-calibrated, no matter what the ground truth is. To construct a sequence of propositions, each of which is assigned probability n/m ∈ Q∩[0,1], and of which a limiting fraction n/m are true: first pick a sequence (X_i)_{i∈N} of independent random variables, where each X_i is uniformly distributed on {1,...,m}. For each i∈N and S⊆{1,...,m} with |S|=n, let P_{i,S} be the proposition that X_i∈S. For each i, exactly a fraction n/m of the propositions {P_{i,S} ∣ S⊆{1,...,m}, |S|=n} are true, no matter what X_i actually is. So if you list {P_{i,S} ∣ i∈N, S⊆{1,...,m}, |S|=n} in order of increasing i, the fraction of them that are true converges to n/m as you go along the list. You can then create a sequence of propositions with varying assigned probabilities, well-calibrated at every probability level, by interspersing these sequences for each rational number between 0 and 1.
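A quick sketch of this construction (the values m=5, n=2 and the number of draws are arbitrary): whatever values the X_i take, even adversarially chosen ones, the fraction of true propositions is exactly n/m.

```python
from itertools import combinations
import random

def fraction_true(xs, m, n):
    """Given realized values xs (each in {1,...,m}), list for each x_i the
    propositions "x_i in S" over all n-element subsets S of {1,...,m},
    and return the fraction of those propositions that are true."""
    subsets = list(combinations(range(1, m + 1), n))
    total = len(xs) * len(subsets)
    true_count = sum(x in S for x in xs for S in subsets)
    return true_count / total

m, n = 5, 2
rng = random.Random(0)
random_truth = [rng.randint(1, m) for _ in range(200)]   # "honest" ground truth
adversarial_truth = [1] * 200                            # worst-case ground truth
print(fraction_true(random_truth, m, n))        # 0.4 == n/m
print(fraction_true(adversarial_truth, m, n))   # still 0.4 == n/m
```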
Alternatively, for any atomless probability measure, you can pick a sequence of propositions such that, in terms of limiting frequencies along the sequence, the probability distribution is extremely poorly calibrated, no matter what the ground truth is. To do this, let X be a random variable distributed uniformly on [0,1]. For each q∈Q∩[0,1], let P_q be the proposition that X<q, and assign P_q probability q. No matter what X actually is, for each p∈[0,1] (except for one, namely p=X), either all of the P_q for rational q≈p are true, or none of them are, rather than the desired fraction p of them. One might object to this example on the grounds that the reason for poor calibration is that every proposition assigned probability approximately p is approximately the same proposition. But this is merely an extreme version of something that could realistically happen with real forecasting questions: sufficiently consequential events can have causal effects on a large fraction of the questions a forecaster might predict, so a non-negligible-probability event may throw off even a good forecaster’s calibration via correlated effects across many questions.
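A sketch of this construction as well (using a finite grid of thresholds in place of all the rationals, and an arbitrary bin width): a single draw of X makes the observed frequency in nearly every probability band 0 or 1.

```python
import random

# One draw of X settles every proposition "X < q" at once. Within any small
# band of assigned probabilities around p, the propositions are (almost) all
# true or (almost) all false, so the observed frequency sits near 0 or 1
# rather than near p.
rng = random.Random(1)
x = rng.random()                          # the single realized value of X
qs = [k / 1000 for k in range(1, 1000)]   # stand-in for the rationals in (0,1)
bin_width = 0.1
for b in range(10):
    lo, hi = b * bin_width, (b + 1) * bin_width
    band = [q for q in qs if lo <= q < hi]   # propositions assigned probability in this band
    freq = sum(x < q for q in band) / len(band)
    print(f"assigned ~[{lo:.1f},{hi:.1f}): observed frequency {freq:.2f}")
```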
The previous two sequences of propositions don’t have to be on different subjects; they could be different ways of organizing the exact same information. For instance, the propositions in the first example could be about the digits in the base-m expansion of the random variable in the second example. So this is more fundamental than just that people may be well-calibrated on some topics but poorly calibrated on others; how well someone scores on a calibration test will depend on how the test is organized, not just on what information they’re forecasting.
So what are people actually measuring when they measure calibration? What it’s intended to measure is failure to form a coherent probability distribution at all, rather than any notion of how accurate a given probability distribution is. Forecasters don’t give explicit probability distributions over everything that could happen; they just attach numbers to a certain set of propositions, and these numbers are intended to be interpreted as probabilities. But if you produce these numbers by thinking about how qualitatively likely something is and then attempting to represent that with an appropriate-seeming number, then these aren’t likely to actually be the probabilities that those propositions have in any actual probability distribution. That is, if you report probabilities by taking some underlying probability distribution and applying a monotonically increasing bijection f:[0,1]→[0,1] to the probabilities of all propositions, then you will be poorly calibrated on sequences of propositions on which the original probability distribution is well-calibrated, and you won’t be reporting probabilities from any actual probability distribution: for instance, if you take three equally likely possible outcomes, exactly one of which must occur, then you will assign each of them probability f(1/3), and these will sum to 3f(1/3) rather than 1.

A suggested way to train calibration is to give probabilities to a large number of propositions whose truth-values can be checked later, and then calculate the fraction q of the propositions you assigned probability approximately p to that turned out to be true, so that the next time you feel the level of confidence you previously reported as probability p, you can report probability q instead. If your reported probabilities were a monotonically increasing function f of the probabilities of some probability distribution that is well-calibrated on the given propositions, then this lets you learn and undo f, so that you accurately report the probabilities from your underlying distribution.

But if you were already reporting probabilities from a coherent probability distribution, and that distribution was poorly calibrated on the given propositions for whatever reason, then this training will make you a worse forecaster, because the adjusted numbers you give will no longer form a coherent probability distribution at all. For example, if X and Y are independent random variables uniformly distributed on [0,1], and you forecast a batch of propositions of the form X<p, later discover the true value of X, and adjust so that you only ever report probabilities 0 or 1 (which would have made you well-calibrated on those questions), then you will become worse at forecasting Y. So calibration training implicitly assumes that if you adjust your probabilities so that they are coherent, they will be well-calibrated on the questions people tend to train on in practice. I expect this assumption is close to true, provided the questions being forecasted are sufficiently diverse that correlations between them don’t throw off calibration.
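To make the f-remapping argument concrete, here is a sketch in which the particular bijection f, the toy forecasting setup, and the binning are all arbitrary choices of mine: applying f to coherent probabilities breaks coherence, and tabulating observed frequencies against stated probabilities approximately recovers f⁻¹, which is what calibration training exploits in the favorable case.

```python
import random

def f(p):
    """An arbitrary monotone bijection [0,1] -> [0,1], chosen to look like
    an 'overconfidence' distortion of the underlying probabilities."""
    return p ** 2 / (p ** 2 + (1 - p) ** 2)

# 1. Coherence breaks: three equally likely, mutually exclusive, exhaustive
#    outcomes get reported probability f(1/3) each, summing to 3*f(1/3) != 1.
print(3 * f(1 / 3))

# 2. Calibration data approximately recovers f^{-1}: each proposition has an
#    underlying probability p (drawn uniformly here), the forecaster reports
#    f(p), and within each bin of stated probabilities the observed frequency
#    is approximately f^{-1} of the stated value.
rng = random.Random(0)
stated, outcomes = [], []
for _ in range(100_000):
    p = rng.random()                 # the (well-calibrated) underlying probability
    stated.append(f(p))              # what the forecaster reports
    outcomes.append(rng.random() < p)

bin_width = 0.1
for b in range(10):
    lo, hi = b * bin_width, (b + 1) * bin_width
    in_bin = [(s, o) for s, o in zip(stated, outcomes) if lo <= s < hi]
    if in_bin:
        avg_stated = sum(s for s, _ in in_bin) / len(in_bin)
        freq = sum(o for _, o in in_bin) / len(in_bin)
        print(f"stated ~{avg_stated:.2f} -> observed {freq:.2f}")
```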