[EDIT: the following is mistaken and the claim in OP was correct, though that wasn’t knowable from the publicly released data. See habryka’s comment.]
Many of the expert predictions were indeed wildly optimistic with tiny error bars, but there's a problem with the story. FiveThirtyEight mistakenly reported (and they still haven't updated this!) that the March 16-17 survey asked experts for the number of cases reported by the COVID Tracking Project on March 29, when in fact the survey asked about March 23.
The correct number for March 23 was 42,152. That was of course in line with the exponential extrapolation, and it was worse than the worst-case estimates of 13 of the 18 researchers, but at least their estimates show only typical levels of insanity and incompetence.
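For intuition, here is a quick extrapolation sketch. The starting count and doubling time below are illustrative assumptions of mine, not figures from the survey, but they show how a number around 42,000 by March 23 falls straight out of simple exponential growth:

```python
# Illustrative exponential extrapolation; the March 16 case count and
# doubling time are rough assumptions, not the survey's actual inputs.
cases_mar16 = 4_500        # assumed US reported cases around March 16
doubling_days = 2.2        # assumed doubling time at that point
days_ahead = 7             # March 16 -> March 23
projected = cases_mar16 * 2 ** (days_ahead / doubling_days)
print(f"{projected:,.0f}")  # on the order of 40,000, near the reported 42,152
```

Anyone extrapolating the curve rather than eyeballing it should have landed in this ballpark.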
I think this is wrong. Multiple people have reached out to the article's authors and illustrators, who said the data is indeed correct but simply wasn't published with the survey. Here is the relevant tweet response:
https://twitter.com/wiederkehra/status/1245040564392902659
Nice legwork! It’s insanity and incompetence on the part of the experts after all.
Metaculus (me included) also did similarly poorly on the question of US case growth. Out of all Metaculus questions, this was probably the one the community did worst on. In theory, expert epidemiologists should do better than the hobbyists on Metaculus, but it may be a bit unfair to rate expert competence on that one question in isolation.
What was surprising about it was mostly the testing ramp-up: the numbers were dominated by how much New York managed to increase its testing. I managed to overestimate the number of diagnosed cases in the Bay Area while still heavily underestimating the total number of cases in the US.
This is the relevant Metaculus question: https://www.metaculus.com/questions/3712/how-many-total-confirmed-cases-of-novel-coronavirus-will-be-reported-in-the-who-region-of-the-americas-by-march-27/
If you look at the community median at a date comparable to the expert epidemiologists' prediction, it's also off by a factor of 6 or so. (I'm not sure what the confidence intervals were, but most likely most people got negative points from their early predictions.)
(For those interested, the Metaculus user “Jotto” collected more examples to compare Metaculus to expert forecasters. I think he might write a post about it or at least share thoughts in a Gdoc with people who would be interested.)
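To see why a tight interval that misses badly gives negative points, here is a generic relative-log-score sketch. The distribution and parameters are hypothetical and this is not Metaculus's exact scoring rule, but the qualitative effect is the same:

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of a normal distribution at x.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical forecast on log10(US cases): median 10^3.8 (~6,300) with a
# tight sigma, compared against a vague baseline with the same median.
outcome = math.log10(42_152)            # what actually happened (~4.62)
tight = normal_pdf(outcome, 3.8, 0.2)   # overconfident forecast
vague = normal_pdf(outcome, 3.8, 1.0)   # wide-interval baseline
relative_log_score = math.log(tight / vague)
print(relative_log_score)  # negative: the tight forecast loses points
```

The wide forecast concedes more density to the tail where the outcome actually landed, so the overconfident one scores below it.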
What would you expect to happen if those experts started participating in Metaculus?
I mostly made my comment to point out that the particular question that’s being used as evidence for expert incompetence may have been unusually difficult to get right. So I don’t want to appear as though I’m confidently claiming that experts need a lesson on forecasting.
That said, I think some people would indeed become a bit better calibrated and we’d see wider confidence intervals from them in the future.
I think the main people who would do well to join Metaculus are people like Ioannidis or the Oxford CEBM people who sling out these unreasonably low IFR estimates. If you’re predicting all kinds of things about this virus 24⁄7 you’ll realize eventually that reality is not consistent with “this is at most mildly worse than the flu.”
Or an error in the editorial process that, for some reason, people are doubling down on. I do think that's a serious possibility.
https://www.businessinsider.com/california-gov-newsom-orders-covid-19-autopsies-back-to-december-2020-4?r=US&IR=T
This puts a new light on experts getting the predictions wrong. People are speculating that some of the California cases date back to January or even December. Similar stuff could have happened in New York. IMO, that’s the type of thing that makes sense to have outside one’s 95% confidence interval.
EDIT: OTOH it seems as though the infections only started in New York in February, and yet they spread to infect a large portion of the population there (tentative serology estimates say about 20% for the city). So the wide spread doesn't seem to be explained by the New York infections having started a lot earlier than expected. But something about this confuses me: if the infections reached the Bay Area months earlier than they reached New York, why is New York worse off? I guess one unusual thing about New York is how insanely little space they have inside restaurants and so on. Go to a California Starbucks and it's awesome and comfortable; go to a New York Starbucks and you can't even sit anywhere and there are walls all around you. Probably infections just spread way faster in that tightly crammed setting?
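A rough back-of-the-envelope on that 20% serology figure. The population and confirmed-case numbers below are approximations I'm assuming for illustration, not sourced figures:

```python
nyc_population = 8_400_000   # approximate NYC population
seroprevalence = 0.20        # tentative serology estimate quoted above
confirmed_cases = 150_000    # rough confirmed NYC case count at the time (assumption)

implied_infections = nyc_population * seroprevalence    # ~1.7 million
undercount = implied_infections / confirmed_cases       # roughly an order of magnitude
print(f"{implied_infections:,.0f} implied infections, ~{undercount:.0f}x the confirmed count")
```

If these assumptions are even roughly right, confirmed cases were missing around an order of magnitude of actual infections, which is the kind of gap that makes diagnosed-case forecasts so sensitive to testing capacity.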