Most of the predictions in this thread will turn out to have been overconfident
The above prediction will turn out to have been overconfident.
All three predictions in this post will turn out to have been overconfident.
:p
Trying to work out if there are any falsification conditions for the above...
Count all the predictions that were assigned a 90% probability, and determine if the percentage that were correct is less than 90%? Repeat for all other probabilities?
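For what it’s worth, a minimal sketch of that bucket-by-bucket check (Python; it assumes the predictions have already been collected into (stated probability, came true?) pairs, and the names are just illustrative):

```python
from collections import defaultdict

# predictions: list of (stated_probability, came_true) pairs,
# e.g. [(0.9, True), (0.9, False), (0.7, True), ...]
def calibration_by_bucket(predictions):
    buckets = defaultdict(list)
    for prob, came_true in predictions:
        buckets[prob].append(came_true)
    report = {}
    for prob, outcomes in sorted(buckets.items()):
        hit_rate = sum(outcomes) / len(outcomes)
        # Overconfident at this level if fewer than `prob` of them came true.
        report[prob] = (len(outcomes), hit_rate, hit_rate < prob)
    return report

print(calibration_by_bucket([(0.9, True), (0.9, False), (0.9, True), (0.7, True)]))
```

The buckets will be small and noisy, so a sub-90% hit rate among the 90% predictions only suggests overconfidence once there are enough of them.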
This is tough to score objectively because not all the predictions in this thread assign a numerical probability to the prediction statement.
Also, because of that whole P(¬X) = 1 - P(X) thing, any deviation from perfect calibration (whether under or overconfidence) is necessarily overconfidence (if not of that particular proposition, then the negation of that proposition).
There are ways of measuring overconfidence. People tend to state their predictions in the positive sense, with a probability greater than 50%. They are overconfident in the sense that, framed that way, they assign too high a probability to the outcome they consider more likely. This is also testable by a variety of metrics. For example, you could do a calculation where one assumes there’s a betting market and everyone here has made a $1 bet at odds that are fair given their confidence as stated in this thread. Then, if they are overconfident in the above sense, one expects the total result over all bets to be a loss.
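A sketch of that calculation, under the reading that each bet is priced to be fair at the predictor’s stated probability (Python; the function name and data format are mine):

```python
# bets: list of (stated_probability, came_true) pairs, where stated_probability
# is the confidence the predictor placed on the outcome they bet on (> 0.5).
def total_betting_result(bets):
    total = 0.0
    for p, came_true in bets:
        # A $1 bet priced to be fair at probability p: stake p to win (1 - p).
        # Its expected value is zero if p is accurate, and negative if the
        # predictor was overconfident (the outcome happens less often than p).
        total += (1 - p) if came_true else -p
    return total

print(total_betting_result([(0.9, True), (0.8, False), (0.95, True)]))
```

A total that comes out persistently negative over many such bets is the signature of overconfidence, though a single thread’s worth of predictions may be too few to separate it from bad luck.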
Right, I’m not denying that overconfidence bias exists and is a coherent concept. I was trying to point out that when we reinterpret the predictions to be more easily verified/falsified (like I have been doing before adding some of the predictions to PredictionBook), the prediction is transformed in a way that doesn’t necessarily preserve the original framing (whether positive or negative), so it would be unclear from the proposition we would be scoring whether the original predictor was under- or overconfident.
Right; in fact, we can see pretty easily that just transferring the predictor’s probability to our better, more precise predictions will intrinsically increase their apparent confidence. The point of our versions is to be narrower and better defined, and so we will judge our prediction correct in fewer states of the world than they would have judged their own prediction (independent of any biases); fewer states of the world means less confidence is justified (P(A&B) ≤ P(A)).
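To make that concrete with a toy example (Python; the events and numbers are invented): the sharpened prediction is effectively “A and B”, where A is the claim the predictor actually made and B is the extra condition added to make it checkable, so it can only come true in a subset of the cases where A does.

```python
import random

random.seed(0)
trials = 100_000
count_a = 0          # the predictor's vaguer claim A holds
count_a_and_b = 0    # the sharpened version: A plus an extra condition B

for _ in range(trials):
    a = random.random() < 0.8   # A holds 80% of the time
    b = random.random() < 0.6   # the extra condition B holds 60% of the time
    count_a += a
    count_a_and_b += a and b

# P(A & B) <= P(A): the sharpened prediction is true in fewer worlds,
# so carrying the original probability over to it overstates confidence.
print(count_a_and_b / trials, "<=", count_a / trials)
```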
Of course, in practice, due to the many biases and lack of experience afflicting them, people are usually horribly overconfident, and we can see many examples of that in this thread and past threads. So between the two effects, we can be pretty sure that predictors are overconfident.
Hahahaha, nice word choice.
Is that supposed to be a joke? I don’t get it.
Wracking my brains over some humorous interpretation, all I can get is maybe ‘P(X)’ is supposed to sound like ‘penis’?
I’m guessing it’s because usually when people use the phrase “that whole x thing”, x is a very simple term (usually one word), not an equation or one of the axioms of probability. Think “that whole job thing” or “that whole guy thing”.
Which explains why I didn’t find it funny: I’ve used “whole [half a dozen words] thing” myself.