They write “at the time of testing.” The study I cite followed up with what happened to patients.
Also relevant: In the last 5 days, 3 more people who had tested positive on the Diamond Princess died. And one person died two weeks ago but somehow it wasn’t reported for a while. So while my own estimates were based on the assumption that 7 / 700 people died, it’s now 11 / 700.
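For concreteness, here is the arithmetic behind those fractions, with a rough normal-approximation error bar (my own sketch, not from any of the papers; the approximation is crude at such small death counts, and 700 is just the denominator used above, not an official case count):

```python
from math import sqrt

POSITIVES = 700  # denominator used in the estimates above

# Deaths among Diamond Princess positives: the earlier figure vs. the update
for deaths in (7, 11):
    p = deaths / POSITIVES
    se = sqrt(p * (1 - p) / POSITIVES)  # binomial standard error
    print(f"{deaths}/{POSITIVES} = {p:.2%}  (95% interval roughly ±{1.96 * se:.2%})")
```

So the jump from 7 to 11 deaths moves the naive fatality fraction from about 1.0% to about 1.6%, and with only 700 people the sampling noise on either figure is itself most of a percentage point.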
I noticed the CDC claims 9 deaths from the Diamond Princess, but I didn’t find support for that in their source. The WHO is still counting 8 deaths. I guess you’re right, but I’d appreciate it if you could provide the source.
I know that. If you follow this discussion back to the beginning, you’ll see that all I’m claiming is that the number of documented cases has been affected by selection bias, because asymptomatic, pre-symptomatic, and similarly inconspicuous cases are unlikely to be diagnosed.
Finally, I believe we both agree that the current IFR underestimates the true death rate, because many patients are still fighting for their lives. Actually, the authors of the preprint are not complete morons: they estimate the “time-delayed IFR” at 0.12% (which I agree is too low), and they make the following remark to explain the higher mortality in Wuhan:
These findings indicate that the death risk in Wuhan is estimated to be much higher than those in other areas, which is likely explained by hospital-based transmission [32]. Indeed, past nosocomial outbreaks have been reported to elevate the CFR associated with MERS and SARS outbreaks, where inpatients affected by underlying disease or seniors infected in the hospital setting have raised the CFR to values as high as 20% for a MERS outbreak.
I’m not saying this study is right. I’m just saying that, unless someone points out a methodological flaw, “their conclusion is too different” is not a reason to discard it.
A Canadian man in his 70s died on 19 March, making him the ninth coronavirus-related death from the ship. Two Japanese passengers in their 70s died on 22 March.
--
Okay. I feel like the discussion is sometimes a bit weird, because the claim that there are a lot of undocumented cases is something both sides (high IFR or low IFR) agree on. The question is how large that portion is. You’re right to point to some sampling biases and so on, but the article under discussion estimates an IFR that is at least a factor of 5 below that of other studies, and a factor of 4 (or 3.5, respectively) below what I think are defensible lower bounds based on analyses of South Korea or the cruise ship. I don’t think selection bias can explain this (at least not on the cruise ship; I agree the hypothesis works for China’s numbers, but my point is that it conflicts with other things we know). (And I already tried to adjust for selection bias in my personal lower bounds.)
It depends on the reasoning. We have three data sets (there are more, but those three are the ones I’m most familiar with):
South Korea
The Diamond Princess
China
How much weight to give the evidence from each data set depends on how much model uncertainty we have about the processes that generated the data, how fine-grained the reporting has been, and how large the sample sizes are. China is good on sample size but poor in every other respect. The cruise ship is poor on sample size but great in every other respect. South Korea is good in every respect.
If I get lower bounds of 0.4% and 0.35% from the first two examples, and someone writes a new paper on China (where model uncertainty is by far the highest) that reaches a conclusion 16x lower than other reputable previous estimates (in which, BTW, no one has pointed out a methodological flaw so far either), then it doesn’t matter whether I can find a flaw in the study design or not. The conclusion is too implausible given the weakness of the data set it’s based on. It surely counts as some evidence, and I’m inclined to move a bit closer to my lower bounds, all else equal, but for me it’s not enough to overturn other things that I believe we already know.
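To illustrate why a ~0.1% figure is hard to square with the cruise-ship data, here is a back-of-the-envelope binomial check (my own sketch, not from either paper): how likely would 11 deaths among roughly 700 infections be if the true fatality risk were the preprint’s 0.12%?

```python
from math import comb

n, deaths = 700, 11  # Diamond Princess figures used above
p = 0.0012           # the preprint's time-delayed IFR of 0.12%

# P(X >= 11) for X ~ Binomial(700, 0.0012): the chance of seeing at least
# this many deaths if the low IFR estimate applied directly to the ship
tail = 1.0 - sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(deaths))
print(f"Expected deaths: {n * p:.2f}, P(at least {deaths} deaths) = {tail:.1e}")
```

One caveat: the ship’s passengers skew old, so its expected fatality risk sits above the population IFR, which means this unadjusted check overstates the conflict somewhat; that age skew is exactly what the adjusted lower bounds above are meant to account for. Even so, the gap between the expected and observed death counts is enormous.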
I read about the new deaths in the Wikipedia article.
--