There is an obvious-in-retrospect symmetry between overconfidence and underconfidence in one’s predictions. Suppose you have made a class of similar predictions of the form A and have on average assigned 0.8 confidence to them, while only 60% actually came true. You might say that you are suffering from overconfidence in your predictions. But when you predict A with confidence p, you also implicitly predict ~A with confidence (1-p): you have on average assigned 0.2 confidence to your ~A-type predictions, while 40% of them actually came true. So if you are overconfident in your A-type predictions, you are bound to be underconfident in your ~A-type predictions.
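Here is a minimal numeric sketch of that symmetry in Python, using the numbers from the example above (the helper name calibration_gap is my own label for illustration, not standard terminology):

```python
# Suppose we made 10 A-type predictions, each at confidence 0.8,
# and 6 of them came true (60%).
a_confidences = [0.8] * 10
a_outcomes = [1] * 6 + [0] * 4

# Each prediction of A at confidence p is implicitly a prediction
# of ~A at confidence 1 - p, which comes true exactly when A fails.
not_a_confidences = [1 - p for p in a_confidences]
not_a_outcomes = [1 - o for o in a_outcomes]

def calibration_gap(confidences, outcomes):
    """Mean assigned confidence minus observed frequency.
    Positive => overconfident, negative => underconfident."""
    mean_conf = sum(confidences) / len(confidences)
    hit_rate = sum(outcomes) / len(outcomes)
    return mean_conf - hit_rate

print(calibration_gap(a_confidences, a_outcomes))          # 0.8 - 0.6 = +0.2 (overconfident)
print(calibration_gap(not_a_confidences, not_a_outcomes))  # 0.2 - 0.4 = -0.2 (underconfident)
```

The two gaps are mirror images by construction: whatever overconfidence shows up on the A side must show up as equal underconfidence on the ~A side.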
Intuitively, overconfidence and underconfidence feel like very different sins. This seems to be due to systematic tendencies in what we view as a prediction and what we don’t: in the exercise above, assuming the set of A-type beliefs is self-selected, the A-type beliefs count as “predictions” whereas the ~A-type beliefs don’t. Some potential factors in what counts as a “prediction”: the belief is held with credence above 0.5; you hope the prediction will come true; or the prediction is very specific and yet assigned a substantial credence (say, above 0.1), so it must be supported by a lot of evidence, whereas its negation is a nonspecific catch-all.