I don’t think it undermines it. What matters is the relative frequency of true cases [1] vs false positives.
With a less severe outcome (e.g. symptomatic disease), we might have a frequency of 1% true cases in the population, plus a 0.1% false-positive rate. The true cases greatly outnumber the false positives.
In contrast, vaccinated death from Covid might occur in only 0.001% of the population, while false-positive deaths occur at 0.01%. Here the false positives dominate.
So even though the false-positive rate is lower in absolute terms for more severe outcomes (because it’s harder to misattribute a death than to get a wrong test result), it distorts the effectiveness estimate more, because it’s large relative to the rate at which the event actually occurs. A rough calculation below makes this concrete.
[1] I say “true cases” deliberately rather than “true positives”, because I mean the objective underlying frequency of the event, not the true-positive detection rate.
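Here’s that calculation: a minimal sketch using the toy numbers above plus one assumption of my own, namely a true effectiveness of 90% (i.e. the unvaccinated true rate is 10x the vaccinated one). All the numbers are illustrative, not real data.

```python
def measured_ve(true_rate_vax, true_rate_unvax, fp_rate):
    """Naive effectiveness estimate when the same false-positive rate
    inflates the observed event rate in both arms."""
    observed_vax = true_rate_vax + fp_rate
    observed_unvax = true_rate_unvax + fp_rate
    return 1 - observed_vax / observed_unvax

# Symptomatic disease: true cases dwarf the false positives.
print(measured_ve(0.01, 0.10, 0.001))        # ~0.89, close to the assumed 0.90

# Death: false positives dwarf the tiny true rate.
print(measured_ve(0.00001, 0.0001, 0.0001))  # ~0.45, far below the assumed 0.90
```

With the symptomatic numbers the estimate barely moves; with the death numbers it collapses to about 45%, even though the false-positive rate there is ten times smaller.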
You’ve given some toy numbers to show that the claim isn’t necessarily undermined, but the question is whether it’s undermined by the actual numbers.
I thought about this for a while, and I think the entailment you point out is correct and we can’t be sure the numbers turn out as in my example.
But also, I think I got myself confused when writing the originally cited passage. I was thinking about how the absolute number of false-positive deaths will be smaller than the absolute number of false-positive symptomatic cases, simply because there are fewer deaths generally. That’s true even if the false-positive rates are the same.
Also, thinking about it more, the mechanisms I had in mind for why the false-positive rate would be lower for severe outcomes don’t obviously hold. It’s probably more like: if someone had a false-positive test and then developed pneumonia symptoms, it would be mistaken for Covid, and the rate of that happening depends only on the regular Covid test’s false-positive rate.
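If that’s the right mechanism, the false-positive death rate is roughly the product of the test’s false-positive rate and the background rate of deaths that look like Covid from other causes. A quick sketch with made-up numbers (both rates are my assumptions, purely for illustration):

```python
# Made-up numbers, assuming the two events are independent.
test_fp_rate = 0.001            # assumed false-positive rate of a Covid test
covid_like_death_rate = 0.0005  # assumed background rate of non-Covid deaths
                                # with a Covid-like presentation (e.g. pneumonia)

# Misattribution needs both a false-positive test and a death that looks like
# Covid, so the false-positive death rate scales with the test's FP rate.
fp_death_rate = test_fp_rate * covid_like_death_rate
print(fp_death_rate)  # ~5e-07 with these numbers
```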