A Canadian man in his 70s died on 19 March, making him the ninth coronavirus-related death from the ship.[102][46] Two Japanese passengers in their 70s died on 22 March.[47]
--
I know that. If you follow this discussion back to the beginning, you’ll see that all I’m claiming is that the number of documented cases has been affected by selection bias, because asymptomatic / pre-symptomatic etc. cases are unlikely to be diagnosed.
Okay. I feel like the discussion is sometimes a bit weird because the claim that there are a lot of undocumented cases is something that both sides (high IFR or low IFR) agree on; the question is how large that portion is. You’re right to point to some sampling biases and so on, but the article under discussion estimates an IFR that is at least a factor of 5 below that of other studies, and a factor of 4 (or 3.5, respectively) below what I think are defensible lower bounds based on analyses of South Korea and the cruise ship. I don’t think selection bias can explain this (at least not on the cruise ship; I agree that the hypothesis works for China’s numbers, but my point is that it conflicts with other things we know). (And I already tried to adjust for selection bias in my personal lower bounds.)
I’m not saying this study is right. I’m just saying that, unless someone points out a methodological flaw, “their conclusion is too different” is not a reason to discard it.
It depends on the reasoning. We have three data sets (there are more, but those three are the ones I’m most familiar with):
South Korea
The Diamond Princess
China
How much weight to give the evidence from each data set depends on how much model uncertainty we have about the processes that generated the data, how fine-grained the reporting has been, and how large the sample sizes are. China is good on sample size but poor in every other respect. The cruise ship is poor on sample size but great in every other respect. South Korea is good in every respect.
If I get lower bounds of 0.4% and 0.35% from the first two examples, and someone writes a new paper on China (where model uncertainty is by far the highest) and gets a conclusion that is 16x lower than some other reputable previous estimates (where, incidentally, no one has pointed out a methodological flaw so far either), it doesn’t matter whether I can find a flaw in the study design or not. The conclusion is too implausible given the weakness of the data set it’s drawn from. It certainly counts as some evidence, and I’m inclined to move a bit closer to my lower bounds, all else equal, but for me it’s not enough to overturn other things I believe we already know.
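To make the factor comparison concrete, here is the arithmetic spelled out as a minimal sketch. The ~0.1% implied IFR for the China paper is an assumption back-inferred from the quoted factors of 4 and 3.5, not a figure stated in the paper itself:

```python
# Sanity check of the "factor of 4 (or 3.5 respectively)" comparison.
# ASSUMPTION: the paper's implied IFR is ~0.1%, inferred from the ratios
# quoted in the discussion, not taken from the paper directly.
study_ifr = 0.001             # ~0.1% (assumed implied estimate of the China paper)
lb_south_korea = 0.004        # 0.4% lower bound from South Korea
lb_diamond_princess = 0.0035  # 0.35% lower bound from the cruise ship

# Factor by which each defensible lower bound exceeds the study's estimate
print(round(lb_south_korea / study_ifr, 2))       # ~4.0
print(round(lb_diamond_princess / study_ifr, 2))  # ~3.5
```

So an estimate consistent with both quoted factors would sit around 0.1%, well below either lower bound.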
I read about the new deaths on the Wikipedia article.