Not directly related to the topic, but since you're speaking of PredictionBook, there is a question I would like to ask: it seems from http://predictionbook.com/predictions that the PredictionBook crowd is mostly calibrated, on average, except at the extrema (100%/0%). How does that square with the "people are broadly overconfident" studies? The two datasets seem contradictory to me. I notice I'm confused.
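To make the calibration claim concrete, here is a minimal sketch (my own illustration with made-up data, not PredictionBook's actual code) of how a calibration curve like the one on that page can be computed: bucket predictions by stated confidence and compare each bucket's confidence with the observed frequency of the predicted outcome. A well-calibrated crowd lies near the diagonal.

```python
from collections import defaultdict

def calibration_curve(predictions):
    """predictions: iterable of (stated_probability, came_true) pairs.

    Buckets by confidence rounded to the nearest 10% and returns, per bucket,
    (bucket, observed frequency of the outcome, number of predictions).
    """
    buckets = defaultdict(list)
    for prob, came_true in predictions:
        buckets[round(prob, 1)].append(came_true)
    return [
        (bucket, sum(outcomes) / len(outcomes), len(outcomes))
        for bucket, outcomes in sorted(buckets.items())
    ]

# Made-up example: well calibrated at 50%, overconfident at 90%.
print(calibration_curve([(0.9, True), (0.9, False), (0.9, False),
                         (0.5, True), (0.5, False)]))
```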
I could pop explanations like "people on PredictionBook are not representative of people in general" or "the kind of predictions made on PredictionBook isn't the same", but they sound more like rationalizations (popping an explanation with poor data backing it to avoid admitting confusion), so I don't accept them.
Does anyone here have better answers (or data to back my "guesses") about this apparent contradiction?
Calibration is trainable. (I would hardly be engaged in it if the studies had shown overconfidence to be incorrigible.) BTW, much more surprising is that generating random numbers is also trainable if the subjects are given access to statistical tests of the quality of their randomness.
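As an illustration of the kind of statistical feedback meant here (my own sketch, not the test used in any particular study): a chi-square check of whether each digit 0-9 appears about equally often in a human-generated sequence. Feeding a statistic like this back to the subject after each attempt is the sort of training signal being described.

```python
from collections import Counter

def digit_frequency_chi_square(digits):
    """Return the chi-square statistic for uniformity of the digits 0-9.

    With 9 degrees of freedom, values much above ~16.9 (the 5% critical
    value) suggest the digit frequencies are detectably non-uniform.
    """
    counts = Counter(digits)
    expected = len(digits) / 10
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# Humans trying to "be random" tend to avoid repeats and overuse middle digits.
attempt = [3, 7, 1, 4, 8, 2, 9, 5, 3, 6, 7, 1, 4, 8, 2, 5, 9, 6, 3, 7]
print(digit_frequency_chi_square(attempt))
```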
Hmmm. It seems likely that some people will be overconfident, and some will be underconfident.
I would guess that a new visitor to the site is more likely to be overconfident than underconfident; if the aggregate still comes out calibrated, that implies the old visitors, those who have practiced a bit, may be slightly more likely to be underconfident than overconfident.
I thought through precisely those same explanations myself. Currently, I'm leaning towards overconfidence bias being one of those "biases" that is easy to reproduce in the artificial situations created in the laboratory, but that diminishes quickly with feedback (as would usually happen in the "real world").