The following was a comment from the discussion of the original post, and I thought it was great. Does anyone know how I might go about designing and practicing such calibration tests well?
“I’d suggest that there is a relatively straightforward and unproblematic place to apply humility: overcoming the universal overconfidence bias. Many studies have found that when people are asked to give estimates with a confidence interval, error rates are far higher than the stated interval would imply. Many of these studies find error rates an order of magnitude or more higher than subjects expected.
You could take self-tests and find out what your overconfidence level is, then develop a calibration scale to correct your estimates. You could then use this to modify your confidence levels on future guesses and approach an unbiased estimate.
One risk is that knowing you are going to modify your intuitive or even logically-deduced confidence level may interfere with your initial guess. This might go in either direction, depending on your personality. It could be that knowing you are going to increase your error estimate will motivate you to subconsciously decrease your initial error estimate, so as to neutralize the anticipated adjustment. Or, in the other direction, knowing that you always underestimate your error could cause you to raise your error guesses, so that your correction factor overshoots.
However, both of these could be dealt with over time by re-taking tests while applying your error calibration and adjusting it as needed.”
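As a starting point for the self-test-and-correct loop the comment describes, here is a minimal sketch of how one might score a calibration test. It assumes you record each answer as a (stated confidence, was it correct?) pair; the function name `calibration_report` and the 10%-wide confidence bands are my own illustrative choices, not anything from the original comment:

```python
from collections import defaultdict

def calibration_report(answers):
    """Bucket answers by stated confidence and compare to the actual hit rate.

    `answers` is a list of (stated_confidence, was_correct) pairs, e.g.
    (0.9, True) means "I was 90% sure, and I was right".
    """
    buckets = defaultdict(list)
    for confidence, correct in answers:
        # Round to the nearest 10% band so small samples still group usefully.
        band = round(confidence, 1)
        buckets[band].append(correct)

    print(f"{'stated':>8} {'actual':>8} {'n':>5}")
    for band in sorted(buckets):
        outcomes = buckets[band]
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"{band:8.0%} {hit_rate:8.0%} {len(outcomes):5d}")

# Example: ten trivia answers given at 90% stated confidence, of which
# only six were actually right -- a sign of overconfidence.
answers = [(0.9, True)] * 6 + [(0.9, False)] * 4
calibration_report(answers)
```

The gap between the "stated" and "actual" columns is the correction factor the comment talks about: if answers given at 90% confidence come out right only 60% of the time, future 90% guesses could be adjusted down toward 60%, then the test re-taken to see whether the adjusted numbers line up.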