Metaculus (me included) also did similarly poorly on the question of US case growth. Out of all Metaculus questions, this was probably the one the community did worst on. Technically, expert epidemiologists should know better than the hobbyists on Metaculus, but maybe it's a bit unfair to rate expert competence on that question in isolation.
What was surprising about it was mostly the testing ramp-up: the numbers were dominated by how much NY managed to increase its testing. I managed to overestimate the number of diagnosed cases in the Bay Area while still heavily underestimating the total number of cases in the US.
This is the relevant Metaculus question: https://www.metaculus.com/questions/3712/how-many-total-confirmed-cases-of-novel-coronavirus-will-be-reported-in-the-who-region-of-the-americas-by-march-27/
If you look at the community median at a date similar to when the expert epidemiologists made their prediction, it's also off by a factor of 6 or so. (I'm not sure what the confidence intervals were, but most likely most people got negative points from their early predictions.)
(For those interested, the Metaculus user "Jotto" collected more examples comparing Metaculus to expert forecasters. I think he might write a post about it, or at least share his thoughts in a Gdoc with people who are interested.)
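To put a rough number on the "negative points" remark above: under a plain log score (not Metaculus's actual scoring rule, which is relative and more involved), a forecast that's off by a factor of 6 gets punished far more harshly if it comes with narrow stated uncertainty than with wide uncertainty. The sketch below uses entirely made-up numbers, purely for illustration.

```python
import math

# Hypothetical numbers, purely for illustration: a point forecast that
# undershoots the resolved value by a factor of 6.
true_value = 600_000
point_estimate = 100_000

# Compare a narrow vs. a wide lognormal forecast; sigma is the spread in
# log-space. The "score" here is just the log-density of log(true_value)
# under a normal centered at log(point_estimate) -- a stand-in for a log
# score, not Metaculus's actual rule.
for sigma in (0.3, 1.0):
    z = (math.log(true_value) - math.log(point_estimate)) / sigma
    log_score = -0.5 * z**2 - math.log(sigma * math.sqrt(2 * math.pi))
    print(f"sigma={sigma}: log score = {log_score:.1f}")

# The narrow forecast (sigma=0.3) scores around -17.5, the wide one
# (sigma=1.0) around -2.5: being confidently wrong is what hurts.
```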
What would you expect to happen if those experts started participating in Metaculus?
I mostly made my comment to point out that the particular question that’s being used as evidence for expert incompetence may have been unusually difficult to get right. So I don’t want to appear as though I’m confidently claiming that experts need a lesson on forecasting.
That said, I think some people would indeed become a bit better calibrated and we’d see wider confidence intervals from them in the future.
I think the main people who would do well to join Metaculus are people like Ioannidis or the Oxford CEBM people who sling out these unreasonably low IFR estimates. If you're predicting all kinds of things about this virus 24/7, you'll eventually realize that reality is not consistent with "this is at most mildly worse than the flu."