Metaculus successfully predicted that Russia would invade Ukraine:
It seems strange to me to say that when the “Metaculus prediction”, according to Metaculus, was lower than 50%. It was the community prediction that was over 50%. The Metaculus prediction weights forecasters with a strong track record more heavily, and it’s interesting that forecasting skill here meant, on average, judging the likelihood to be lower.
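As a toy illustration of how the two aggregates can diverge (the forecasts and skill weights below are invented, and Metaculus’s actual aggregation algorithm is considerably more sophisticated): a skill-weighted average can sit under 50% even when the unweighted community median is over it.

```python
# Toy illustration only: the forecasts and skill weights are invented,
# and Metaculus's real aggregation algorithm is more sophisticated.
import statistics

# (forecast probability of invasion, forecaster skill weight)
forecasts = [(0.70, 0.5), (0.65, 0.8), (0.55, 1.0), (0.35, 2.5), (0.30, 3.0)]

community_median = statistics.median(p for p, _ in forecasts)
skill_weighted = sum(p * w for p, w in forecasts) / sum(w for _, w in forecasts)

print(f"community median: {community_median:.2f}")  # 0.55 -> over 50%
print(f"skill-weighted:   {skill_weighted:.2f}")    # ~0.41 -> under 50%
```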
My impression is that the vast majority of pundits failed to predict it, including Matthew Yglesias and Scott Alexander.
Talking about the success and failure of predictions in a binary way is bad. Saying that X happens with 40% likelihood doesn’t mean that you failed to predict it if X indeed happens.
If I were a billionaire, a 40% chance of war would likely be enough for me to leave the potential warzone.
If government officials claim that a catastrophe is imminent, they are most likely right. If they claim that there is nothing to worry about (in spite of #1 and #2), then they’re lying (e.g. to prevent panic).
I see no reason why you should draw such a conclusion from looking at a single example.
It seems strange to me to say that when the “Metaculus prediction”, according to Metaculus, was lower than 50%. It was the community prediction that was over 50%.
You’re right, I meant the community prediction. I’ll fix the phrasing to avoid ambiguity.
Talking about the success and failure of predictions in a binary way is bad. Saying that X happens with 40% likelihood doesn’t mean that you failed to predict it if X indeed happens.
For simplicity, I assume that if Pundit1 said 60% and Pundit2 said 40%, and the event actually happens, then Pundit1 was right about the future, and Pundit2 was wrong. And Pundit3, who said 10%, was even more wrong. For an event with a binary outcome, I think this is simplistic but good-enough language.
But I agree, it would be better to use a quantitative measure (e.g. a Brier score). I’m not sure how to calculate it in this case.
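For a single binary event the calculation is simple; a minimal sketch, using the made-up pundit numbers above (outcome = 1 encodes “the event happened”):

```python
# Brier score for a single binary event: (forecast - outcome)^2.
# 0 is a perfect forecast, 1 is maximally wrong; lower is better.
def brier_score(forecast: float, outcome: int) -> float:
    return (forecast - outcome) ** 2

# The hypothetical pundits above, assuming the event occurred (outcome = 1).
for name, p in {"Pundit1": 0.6, "Pundit2": 0.4, "Pundit3": 0.1}.items():
    print(f"{name}: {brier_score(p, 1):.2f}")
# Pundit1: 0.16, Pundit2: 0.36, Pundit3: 0.81
```

On this measure the ranking matches the intuition above: Pundit1 scores best and Pundit3 worst, without calling anyone simply “right” or “wrong”.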
If I were a billionaire, a 40% chance of war would likely be enough for me to leave the potential warzone.
I agree, it is reasonable.
I see no reason why you should draw such a conclusion from looking at a single example.
The conclusions I’ve listed are not based solely on the single example of Ukraine. They’re rather a somewhat formalized intuition, inspired by everything I know about similar situations, from Chernobyl to Covid to global warming.
For simplicity, I assume that if Pundit1 said 60% and Pundit2 said 40%, and the event actually happens, then Pundit1 was right about the future, and Pundit2 was wrong.
No, neither of them was right or wrong. That’s just not how probabilities work, and simplifying in that way confuses what’s going on.
They’re rather a somewhat formalized intuition, inspired by everything I know about similar situations, from Chernobyl to Covid to global warming.
If you want to draw more general conclusions, you would also have to look at events where government officials made forecasts and then nothing happened, like Iraqi WMDs.
No, neither of them was right or wrong. That’s just not how probabilities work, and simplifying in that way confuses what’s going on.
By “wrong” here I mean “incorrectly predicted the future”. If there is a binary event, and I predicted outcome A, but reality delivered outcome B, then I incorrectly predicted the future. Perhaps the source of confusion here is my inability to precisely express ideas in English (I’m a non-native English speaker), and I apologize for that.
If you want to draw more general conclusions, you would also have to look at events where government officials made forecasts and then nothing happened, like Iraqi WMDs.
I agree, it’s an excellent idea. In general, it’s quite possible that some politicians would use a high risk of catastrophe (real or fake) to achieve political goals.
No, neither of them was right or wrong. That’s just not how probabilities work, and simplifying in that way confuses what’s going on.
By “wrong” here I mean “incorrectly predicted the future”. If there is a binary event, and I predicted outcome A, but reality delivered outcome B, then I incorrectly predicted the future.
Maybe an intuition pump for what I think Christian is pointing at:
Assume you have a six-faced die, and you predict that the probability that your next roll will be a 6, and not one of the other faces, is about 16.67%.
Then you roll the die, and the face with the 6 comes up on top.
Was your prediction wrong?
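To make the pump concrete, a minimal simulation (a sketch, assuming a fair die): the 16.67% is a claim about long-run frequency, which a single 6 coming up does nothing to contradict.

```python
# Minimal sketch, assuming a fair die: the 16.67% forecast is about
# long-run frequency, not a verdict on any single roll.
import random

random.seed(0)
rolls = [random.randint(1, 6) for _ in range(100_000)]
print(f"fraction of sixes: {rolls.count(6) / len(rolls):.4f}")  # ~0.1667
```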
Thanks! I think I now see the root of the confusion. These are two closely related but different tasks:
1. predicting the outcome of an event
2. estimating the probability of the outcome
In your example, the tasks could be completed as follows:
1. “the next roll will be a 6” (i.e. I know it because the die is unfair)
2. “the probability of a 6 is about 16.67%” (i.e. I can correctly calculate it because the die is fair)
If one is trying to predict the future, one could fail either (or both) of the tasks.
In the situation where people were trying to predict whether Russia would invade Ukraine, some of them got the probability right but failed to predict the actual outcome. And the aforementioned pundits failed both tasks (in my opinion), because for a well-informed person it was already clear that Russia would invade with a probability much higher than 40%.