Interesting idea, which I hope the testing centers are using (but I hadn't heard of it, so maybe not!). The prior is very important. Initially it's just the likelihood of a positive for each individual sample, but correlations between samples can help find optimal groupings. That gives us a probability of a positive for any arbitrary group, and Bayes tells us how to update the individual probabilities when a group tests positive.
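To make that concrete, here's a minimal sketch of the update in Python, under two simplifying assumptions of mine (not anything I know the labs to do): samples are independent, and the test itself is perfect, with no false positives or negatives.

```python
def pool_positive_prob(priors):
    """P(pool positive) = 1 - P(every member negative),
    assuming independent samples and a perfect test."""
    p_all_negative = 1.0
    for p in priors:
        p_all_negative *= 1.0 - p
    return 1.0 - p_all_negative

def update_on_positive_pool(priors):
    """Bayes update for each member of a pool that tested positive.
    'Sample i positive' implies 'pool positive', so the posterior is
    just p_i / P(pool positive). (A negative pool, with a perfect test,
    sends every member's probability to zero instead.)
    Caveat: members are no longer independent after conditioning."""
    p_pool = pool_positive_prob(priors)
    return [p / p_pool for p in priors]

print(update_on_positive_pool([0.02, 0.05, 0.10]))
# -> roughly [0.12, 0.31, 0.62]: three low-prior samples, one positive pool
```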
I haven’t written the simulator, but my expectation is that if most of the samples have a significant likelihood of being positive, you don’t gain much by grouping: if you then binary-search, you’ll waste more tests on the groupings than you gain by skipping individual tests when a group comes back negative. If positives are unlikely, though, you should be able to maximize information per test by choosing groups with roughly a 50% chance of testing negative. Eliminate all the negatives at once, then regroup based on the new probabilities (the posterior of this test becomes the prior for the next) and repeat.
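As a sketch of how you might pick those groups (greedy packing; `target_negative` is an arbitrary knob I made up, and real pools are also capped by dilution limits):

```python
def make_groups(priors, target_negative=0.5):
    """Greedy packing: add samples (lowest-risk first) until
    P(group all-negative) = prod(1 - p_i) falls to ~target_negative.
    Assumes independence; a smarter version would exploit correlation
    (e.g. pool whole households together)."""
    order = sorted(range(len(priors)), key=lambda i: priors[i])
    groups, current, p_neg = [], [], 1.0
    for i in order:
        current.append(i)
        p_neg *= 1.0 - priors[i]
        if p_neg <= target_negative:
            groups.append(current)
            current, p_neg = [], 1.0
    if current:
        groups.append(current)
    return groups

print([len(g) for g in make_groups([0.03] * 100)])
# -> [23, 23, 23, 23, 8]: at a 3% prior, ~23 samples hit the 50% mark
```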
Eventually, you get down to individuals with ~50% likelihood, and you pretty much have to test each one. Or declare that 50% (or 30%, or some other likelihood) is “good enough to treat”, and skip the tests that won’t affect treatment.
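Putting it together, one round of the simulator I have in mind might look like this (reusing the two helpers above, and still leaning on the perfect-test and independence assumptions, which conditioning on pool results technically breaks):

```python
import random

def simulate_round(priors, rng=random.Random(0)):
    """One round: pool to ~50%-negative groups, test each pool
    (perfect test assumed), return updated per-sample probabilities.
    Reuses make_groups() and pool_positive_prob() from above.
    A full simulator would draw `truth` once and hold it across rounds."""
    truth = [rng.random() < p for p in priors]      # hidden ground truth
    posteriors = list(priors)
    for group in make_groups(priors):
        if any(truth[i] for i in group):            # pool tests positive
            p_pool = pool_positive_prob([priors[i] for i in group])
            for i in group:
                posteriors[i] = priors[i] / p_pool
        else:                                       # pool tests negative
            for i in group:
                posteriors[i] = 0.0
    return posteriors

# Iterate: feed posteriors back in as priors, stop once everyone is
# either cleared (p = 0) or past whatever cutoff counts as
# "good enough to treat", and individually test whoever is left.
```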
Thanks! Interesting thoughts.
I agree that if P≥50% then pooling is likely useless. We eventually want to be doing things like testing everyone who has come anywhere close to a known case, and/or testing absolutely everyone who has a fever. So if we do things right, we’re eventually hoping to be at P<5%, or maybe even P<<1% in the longer term. South Korea is commendably aggressive in testing, and they’re at P<5%, or something like that.
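For a rough sense of what those rates buy, the classic two-stage scheme (Dorfman, 1943: test pools of n, individually retest everyone in a positive pool) costs an expected 1/n + 1 - (1-p)^n tests per person, where p is the per-person positive probability. A few lines make the dependence on P vivid:

```python
def tests_per_person(p, n):
    """Dorfman two-stage pooling: 1/n pooled tests per person,
    plus n individual retests whenever the pool is positive."""
    return 1.0 / n + (1.0 - (1.0 - p) ** n)

for p in (0.5, 0.05, 0.01):
    best_n = min(range(2, 51), key=lambda n: tests_per_person(p, n))
    cost = tests_per_person(p, best_n)
    note = "  (never beats individual testing)" if cost >= 1 else ""
    print(f"p={p}: n={best_n}, {cost:.2f} tests/person{note}")
# p=0.5  -> pooling only adds overhead
# p=0.05 -> n=5,  ~0.43 tests/person
# p=0.01 -> n=11, ~0.20 tests/person
```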