How do we know that calibration training will improve our calibration on long-term predictions? It would seem that we necessarily have little evidence about the efficacy of short-term calibration training for calibrating long-term predictions.
We can’t know this with much confidence, but it seems likely to me. The reason is fairly simple: most people are wildly overconfident, and calibration training reduces people’s confidence in their predictions. It’s hard to become as underconfident as most people are overconfident, so calibration training should improve one’s accuracy in general. Indeed, several studies show calibration transfer between domains (i.e. calibration training in one domain improves one’s accuracy in another), though it’s true I’m not aware of a study showing specifically that calibration training on short-term predictions improves one’s accuracy on long-term predictions. But if it didn’t, that would be an exception to the general rule, and I don’t see a good reason to expect such an exception.
A simple model of calibration training is that it helps you more honestly integrate whatever evidence is floating around in your brain pertaining to a subject. Whether a prediction is short-term or long-term ought to be less important than other aspects of the quality of that evidence. This model predicts that, for example, calibration training on short-term predictions about which one has very little evidence should improve calibration on long-term predictions about which one also has very little evidence.
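To make the notion of “calibration” above concrete, here is a minimal sketch (my own illustration, not from any study mentioned) of how one might measure it: group a forecaster’s predictions by stated confidence and compare each group’s average confidence to its actual hit rate. An overconfident forecaster’s 90%-confidence bucket resolves true well under 90% of the time.

```python
# Minimal calibration check (illustrative sketch): bucket predictions
# by stated confidence and compare confidence to empirical hit rate.
from collections import defaultdict

def calibration_report(predictions):
    """predictions: list of (confidence, outcome) pairs,
    with confidence in [0, 1] and outcome True/False.
    Returns {confidence_bin: (hit_rate, n_predictions)}."""
    buckets = defaultdict(list)
    for conf, outcome in predictions:
        # round to one decimal place, i.e. group into ~10% bins
        buckets[round(conf, 1)].append(outcome)
    return {
        conf: (sum(outcomes) / len(outcomes), len(outcomes))
        for conf, outcomes in sorted(buckets.items())
    }

# A typically overconfident forecaster: predictions made at 90%
# confidence that come true only 6 times out of 10.
preds = [(0.9, True)] * 6 + [(0.9, False)] * 4
print(calibration_report(preds))  # {0.9: (0.6, 10)}
```

A well-calibrated forecaster would show hit rates close to each bucket’s confidence; calibration training, on this picture, is practice at closing that gap, and the domain of the predictions matters less than the honesty of the confidence assignment.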
And people regularly make both short- and long-term predictions on PredictionBook, so in 5 to 10 years…
Yes, I’ve been trying to make both short- and long-term predictions on PredictionBook.