A simple model of calibration training is that it helps you more honestly integrate whatever evidence is floating around in your brain pertaining to a subject. Whether a prediction is short-term or long-term ought to be less important than other aspects of the quality of that evidence. This model predicts that, for example, calibration training on short-term predictions about which one has very little evidence should improve calibration on long-term predictions about which one also has very little evidence.
And people regularly make both short- and long-term predictions on PredictionBook, so in 5 to 10 years…
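The model's prediction is testable: compute calibration separately for short- and long-term resolved predictions and compare the two tables. Here is a minimal sketch of that check, using hypothetical resolved predictions as stand-ins for PredictionBook data (the data and bucket width are illustrative assumptions, not anything from an actual export):

```python
from collections import defaultdict

def calibration_by_bucket(predictions):
    """Group (stated_confidence, came_true) pairs into buckets by stated
    confidence (rounded to one decimal) and return the observed frequency
    of true outcomes in each bucket."""
    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        buckets[round(confidence, 1)].append(came_true)
    return {b: sum(outcomes) / len(outcomes) for b, outcomes in sorted(buckets.items())}

# Hypothetical resolved predictions: (stated confidence, did it come true?).
short_term = [(0.7, True), (0.7, True), (0.7, False), (0.9, True), (0.9, True)]
long_term = [(0.7, True), (0.7, False), (0.7, False), (0.9, True), (0.9, False)]

# If calibration transfers across horizons, the observed frequencies in each
# confidence bucket should be similarly close to the bucket's stated confidence.
print(calibration_by_bucket(short_term))
print(calibration_by_bucket(long_term))
```

A well-calibrated forecaster's 70% bucket should come true about 70% of the time in both tables; a gap between the two tables at the same confidence level would be evidence against the transfer claim.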
Yes, I’ve been trying to make both short- and long-term predictions on PredictionBook.