There are ways of measuring overconfidence. Frame each prediction positively, so that the declared outcome is the one assigned a probability greater than 50%; the predictors are overconfident if, framed that way, they assign too high a probability to that more likely outcome. This is also testable by a variety of metrics. For example, you could assume there is a betting market in which everyone here stakes $1 at the odds implied by the confidence they gave in this thread. Then, if they are overconfident in the above sense, the expected total result over all those bets is a loss.
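A minimal sketch of that calculation, using hypothetical predictions (the data and numbers below are made up for illustration):

```python
# Sketch of the betting-market check described above, with made-up predictions.
# Each entry is (stated_probability, came_true). The bettor stakes $1 at odds
# implied by their stated probability: win (1 - p) / p dollars if right, lose
# the $1 stake if wrong. A perfectly calibrated predictor breaks even in
# expectation; an overconfident one expects a net loss over many bets.

predictions = [
    (0.90, True),
    (0.95, True),
    (0.80, False),  # a miss on a high-confidence claim costs the whole stake
    (0.70, False),
    (0.99, True),
]

def total_profit(preds):
    profit = 0.0
    for p, came_true in preds:
        profit += (1.0 - p) / p if came_true else -1.0
    return profit

print(f"Net result over all bets: ${total_profit(predictions):+.2f}")
```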
Right, I’m not denying that overconfidence bias exists and is a coherent concept. I was trying to point out that when we reinterpret the predictions to be more easily verified/falsified (as I have been doing before adding some of them to PredictionBook), the prediction is transformed in a way that doesn’t necessarily preserve the original framing (positive or negative). So it would be unclear, from the proposition we actually score, whether the original predictor was under- or overconfident.
Right; in fact, we can see pretty easily that just transferring the predictor’s probability to our better, more precise versions will intrinsically inflate their apparent confidence. The point of our versions is to be narrower and better defined, so we will judge our prediction correct in fewer states of the world than they would have judged their own (independent of any biases); fewer states of the world means less confidence is justified (P(A&B) ≤ P(A)).
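To make that concrete with made-up numbers: if the predictor gave 80% to the original claim A, and our sharper version adds a condition B with P(B|A) = 0.9, then P(A&B) = 0.8 × 0.9 = 0.72. Carrying the original 80% over to the narrower claim A&B therefore overstates the confidence that was actually warranted, before any bias enters the picture.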
Of course, in practice, thanks to the many biases and lack of experience afflicting them, people are usually horribly overconfident, and we can see many examples of that in this thread and past threads. So between the two effects (the mechanical narrowing above and the ordinary biases), we can be pretty sure the predictors here are overconfident.