In addition to your 1-6, I have also seen people use “overconfident” to mean something more like “behaving as though the process that generated a given probabilistic prediction was higher-quality (in terms of Brier score or the like) than it really is.”
In prediction market terms: putting more money than you should into the market for a given outcome, as distinct from any particular fact about the probability (or probabilities) implied by your stake in that market.
For example, suppose there is some forecaster who predicts on a wide range of topics. And their forecasts are generally great across most topics (low Brier score, etc.). But there’s one particular topic area—I dunno, let’s say “East Asian politics”—where they are a much worse predictor, with a Brier score near random guessing. Nonetheless, they go on making forecasts about East Asian politics alongside their forecasts on other topics, without noting the difference in any way.
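(To make the scenario concrete, here is a toy sketch in Python. All topic labels, probabilities, and outcomes are invented for illustration; the point is just that a per-topic Brier score can come out fine everywhere except one topic, where it sits near the 0.25 you'd get from always guessing 50%.)

```python
# Toy illustration (all data invented): per-topic Brier scores for a forecaster
# who is good overall but near coin-flip quality on one topic.

from collections import defaultdict

# (topic, forecast probability, binary outcome) triples, purely hypothetical
forecasts = [
    ("economics",            0.90, 1),
    ("economics",            0.10, 0),
    ("elections",            0.80, 1),
    ("elections",            0.15, 0),
    ("east_asian_politics",  0.60, 1),
    ("east_asian_politics",  0.60, 0),
]

def brier(pairs):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - outcome) ** 2 for p, outcome in pairs) / len(pairs)

by_topic = defaultdict(list)
for topic, p, outcome in forecasts:
    by_topic[topic].append((p, outcome))

for topic, pairs in sorted(by_topic.items()):
    print(f"{topic:>20}: Brier = {brier(pairs):.3f}")

# For reference, always predicting 0.5 gives a Brier score of 0.25,
# so the east_asian_politics score here is roughly "random guessing" quality.
```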
I could easily imagine this forecaster getting accused of being “overconfident about East Asian politics.” And if so, I would interpret the accusation to mean the thing I described in the first 2 paragraphs of this comment, rather than any of 1-6 in the OP.
Note that the objection here does not involve anything about the specific values of the forecaster’s distributions for East Asian politics—whether they are low or high, extreme or middling, flat or peaked, etc. This distinguishes it from all of 1-6 except for 4, and of course it’s also distinct from 4, just not on that basis.
The objection here is not that the probabilities suffer from some specific, correctable error like being too high or extreme. Rather, the objection is that the forecaster should not be reporting these probabilities at all; or that they should only report them alongside some sort of disclaimer; or that they should report them as part of a bundle where they have “lower weight” than other forecasts, if we’re in a context like a prediction market where such a thing is possible.
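(A minimal sketch of what "lower weight in a bundle" could mean, assuming a simple linear opinion pool and invented numbers: the down-weighted forecaster's stated probability is left alone; they just get less influence on the aggregate.)

```python
# Toy illustration (weights and probabilities invented): down-weighting one
# source in an aggregate rather than adjusting the probability it reports.

def pool(forecasts):
    """Weighted average (linear opinion pool) of probability forecasts."""
    total_weight = sum(w for _, w in forecasts)
    return sum(p * w for p, w in forecasts) / total_weight

# (probability, weight) pairs for a single question
forecasts = [
    (0.70, 1.00),   # forecaster with a strong track record on this topic
    (0.20, 0.25),   # the "overconfident" forecaster, down-weighted but not re-scaled
]

print(f"pooled probability: {pool(forecasts):.3f}")

# The second forecast still says 0.20; the complaint is addressed by giving it
# less weight in the bundle, not by pushing its number toward 0.5.
```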