I think my current expectation of risk reduction from antigen tests is more like 20-60% than <10%, but I’ll also note that it matters a lot what your population is. In Elizabeth’s social circle my guess is that most people aren’t coming to parties if they’ve had any suspected positive contact, have any weak symptoms, etc, such that there’s a strong selection effect screening out the clearly-positive people. (Or like, imagine everyone with these risk factors takes an antigen test anyways—then requiring tests doesn’t add anything.)
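To make the selection effect concrete, here is a toy model (the high/low viral-load split, the sensitivities, and the risk ratio are all hypothetical numbers I picked for illustration): if self-screening already keeps most high-viral-load people home, the infectious people who do show up skew toward low loads, where antigen tests are weakest, so the marginal risk reduction from requiring tests drops.

```python
# Toy model of the selection effect. All numbers are made up for
# illustration; the point is structural, not empirical.

def marginal_risk_reduction(share_high_load, sens_high, sens_low,
                            risk_ratio_high_vs_low):
    """Fraction of transmission risk removed by requiring antigen
    tests, among infectious people who actually attend.

    share_high_load: fraction of attending infectious people with high
        viral load (small if self-screening kept most of them home)
    sens_high, sens_low: test sensitivity at high vs. low viral load
    risk_ratio_high_vs_low: relative transmission risk of a high-load
        attendee vs. a low-load one
    """
    w_high = share_high_load * risk_ratio_high_vs_low
    w_low = 1.0 - share_high_load
    return (w_high * sens_high + w_low * sens_low) / (w_high + w_low)

# Weak self-screening: 20% of infectious attendees are high-load.
print(marginal_risk_reduction(0.20, 0.9, 0.3, 5.0))  # ~0.63
# Strong self-screening: only 5% are high-load.
print(marginal_risk_reduction(0.05, 0.9, 0.3, 5.0))  # ~0.43
```

With plausible-looking inputs this lands in roughly the 20-60% range, and the stronger the self-screening, the less the test requirement adds.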
I haven’t read this whole thread but for the record, I often agree with Michael Mina and think he does great original thinking about these topics, yet think in this case he’s just wrong with his extremely high estimates of antigen test sensitivity during contagion. I think his model on antigen tests specifically is theoretically great and a good extrapolation from a few decent assumptions, but just doesn’t match what we see on the ground.
For example, I’ve written before about how even PCRs seem to have a 5-10% FNR in hospitalized patients, and how PCR tests look even worse in anecdata. Antigen tests get baselined against PCR, so they will be at least this bad.
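A quick composition of the error rates, under my assumption that antigen tests almost never catch infections that PCR itself misses, so the sensitivities roughly multiply:

```python
# Rough composition of error rates (assumption: antigen tests almost
# never catch infections that PCR itself misses).
pcr_sensitivity = 0.92             # ~8% FNR, middle of the 5-10% range
antigen_vs_pcr_sensitivity = 0.83  # e.g. the QuickVue trial figure below
true_sensitivity = antigen_vs_pcr_sensitivity * pcr_sensitivity
print(true_sensitivity)            # ~0.76 against actual infection
```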
We also see things like a clinical trial of QuickVue tests showing only ~83% sensitivity. Admittedly, other studies of antigen tests show ~98% sensitivity, but I think publication bias and results-desirability bias mean that if the clinical trial shows only 83%, that’s decent evidence the studies finding higher numbers are somewhat flawed. I would not have guessed they could get to 98%, though, so there’s something here that doesn’t make sense.
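As a sanity check on whether an ~83% trial and ~98% studies can both be right, here are Wilson score intervals with hypothetical sample sizes (I don’t have the real ones at hand, so ~100 confirmed positives per study is an assumption):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical: ~100 PCR-confirmed positives per study.
print(wilson_ci(83, 100))  # roughly (0.74, 0.89)
print(wilson_ci(98, 100))  # roughly (0.93, 0.99)
```

Even at modest sample sizes the intervals don’t come close to overlapping, so these can’t just be noisy estimates of the same number; at least one set of studies is measuring something different (population, timing, swab technique) or is biased.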
I know the standard heuristic is to trust scientific findings over anecdata, but I think in this case that should be reversed if you’re extremely scientifically literate and closely tracking things on the ground. Knowing all the things that can go wrong with even very careful scientific findings, I just don’t trust these studies claiming very high sensitivity much—I think they also contradict FDA data on Cue tests, data/anecdata about nasal+saliva tests working better than just nasal, etc.
(Maybe I’m preaching to the choir and you know most of this, given your range was 25-90%. But I guess I see pretty good evidence it can’t possibly be at the high end of that range.)
That all makes complete sense.
And yes, the specifics of the population make a huge difference. Honestly, I think that accounts for the breadth of my estimate range more than uncertainty about abstract test performance does.