We can’t know this with much confidence, but it seems likely to me. The reason is pretty simple: most people are wildly overconfident, and calibration training reduces people’s confidence in their predictions. It’s hard to end up as underconfident as most people are overconfident, so calibration training should improve one’s accuracy in general. Indeed, several studies show that calibration transfers between domains (i.e., calibration training in one domain improves one’s accuracy in another), though it’s true I’m not aware of a study showing specifically that calibration training on short-term predictions improves one’s accuracy on long-term predictions. But if it didn’t, that would be an exception to the general rule of cross-domain transfer, and I don’t see a good reason to think long-term prediction will turn out to be such an exception.
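To make “calibration” concrete, here’s a minimal sketch (in Python, with made-up example data) of how it’s usually scored: group predictions by stated confidence and check whether, say, the “90% confident” predictions actually come true about 90% of the time. The function name and the example forecasts are purely illustrative, not drawn from any particular study.

```python
from collections import defaultdict

def calibration_report(predictions):
    """predictions: list of (stated_confidence, came_true) pairs,
    where stated_confidence is in [0.5, 1.0] and came_true is a bool."""
    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        # Group by stated confidence, rounded to the nearest 10%.
        buckets[round(confidence, 1)].append((confidence, came_true))
    for bucket in sorted(buckets):
        pairs = buckets[bucket]
        avg_conf = sum(c for c, _ in pairs) / len(pairs)
        hit_rate = sum(t for _, t in pairs) / len(pairs)
        # A well-calibrated forecaster has hit_rate close to avg_conf in each
        # bucket; overconfidence shows up as hit_rate consistently below it.
        print(f"{bucket:.0%} bucket: stated {avg_conf:.0%}, "
              f"actual {hit_rate:.0%} (n={len(pairs)})")

# Hypothetical forecasts showing an overconfident pattern: the "90% sure"
# predictions come true only about 60% of the time.
calibration_report([
    (0.9, True), (0.9, False), (0.9, True), (0.9, False), (0.9, True),
    (0.7, True), (0.7, True), (0.7, False), (0.6, True), (0.6, False),
])
```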