Coronavirus tests and probability

I recently had a “duh” moment while reading an Atlantic article. Coronavirus tests are not screening tests! Like, didn’t we all learn about Bayesian probability, sensitivity, and the dangers of false positives and false negatives from a very similar question? And then, when I started reading about coronavirus test distribution in the news, I forgot all about that.
But I don’t know what the probabilities are. A brief search didn’t find them. Anyone know?
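For concreteness, here is the calculation I have in mind, as a minimal sketch. The sensitivity, specificity, and prevalence numbers below are made up purely for illustration, since the real values are exactly what I couldn’t find; the point is just the shape of the Bayes’-rule arithmetic.

# A minimal sketch with made-up numbers. "sens" is the probability an infected
# person tests positive, "spec" is the probability an uninfected person tests
# negative, and "prev" is the prior probability of infection.

def p_infected_given_negative(sens, spec, prev):
    # Bayes' rule: P(infected | negative test)
    false_neg = (1 - sens) * prev        # infected, but the test misses it
    true_neg = spec * (1 - prev)         # not infected, test correctly negative
    return false_neg / (false_neg + true_neg)

def p_infected_given_positive(sens, spec, prev):
    # Bayes' rule: P(infected | positive test)
    true_pos = sens * prev               # infected, test correctly positive
    false_pos = (1 - spec) * (1 - prev)  # not infected, but the test flags it
    return true_pos / (true_pos + false_pos)

# All three numbers below are assumptions, not reported values.
sens, spec, prev = 0.70, 0.95, 0.10
print("P(infected | negative test):", round(p_infected_given_negative(sens, spec, prev), 3))
print("P(infected | positive test):", round(p_infected_given_positive(sens, spec, prev), 3))

With those made-up numbers, a negative test takes you from a 10 percent prior down to about a 3 percent chance of infection, which may or may not be reassuring depending on what decision hangs on it. Here’s the relevant passage from the article: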
Expectations are tempered; a similar promise from Vice President Mike Pence of 1.5 million tests by the end of last week did not come to pass. But even when these tests eventually are available, their limitations will have to be recognized. Among them: these are diagnostic tests, not screening tests—a distinction that should shape expectations about the role doctors will play in helping manage this viral disease.
The difference comes down to a metric known as the sensitivity of the test: the proportion of people who have the virus who will indeed test positive. No medical test is perfect. Some produce false positives, meaning the result may say you’re infected when you’re actually not. Others aren’t sensitive enough, meaning they miss an infection that is actually there.
The latter is the model for a diagnostic test. These tests can help to confirm that a sick person has the virus; but they can’t always tell you that a person does not. When people come into a clinic or hospital with severe flu-like symptoms, a positive test for the new coronavirus can seal the diagnosis. Screening mildly ill people for the presence of the virus is, however, a different challenge.
“The problem in a scenario like this is false negatives,” says Albert Ko, the chair of epidemiology of microbial diseases at the Yale School of Public Health. If you want to use a test to, for example, help you decide whether an elementary-school teacher can go back to work without infecting his whole class, you really need a test that will almost never miss the virus.
“The sensitivity can be less than 100 percent and still be very useful,” Ko says, in many cases. But as that number falls, so does the usefulness of any given result. In China, the sensitivity of tests has been reported to be as low as 30 to 60 percent—meaning roughly half of the people who actually had the virus had negative test results. Using repeated testing was found to increase the sensitivity to 71 percent. But that means a negative test still couldn’t fully reassure someone like the teacher that he definitely doesn’t have the virus. At that level of sensitivity, Ko says, “if you’re especially risk-averse, do you just say: ‘If you have a cold, stay home’?”
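To put rough numbers on the teacher scenario: suppose a single test catches only about half of true infections, as at the low end of the figures quoted. Under the idealized assumption that repeat tests miss independently (real repeat swabs surely are not), two tests would get you to about 75 percent combined sensitivity, in the neighborhood of the 71 percent reported. A sketch, with an assumed specificity and an assumed prior that the teacher is infected:

# Per-test sensitivity is taken from the roughly-50-percent figure quoted above;
# the 95 percent specificity, the 5 percent prior, and the independence of
# repeat tests are all assumptions made for illustration.
sens_single = 0.50
spec = 0.95
prior = 0.05

for n in (1, 2):
    combined_sens = 1 - (1 - sens_single) ** n       # chance at least one test catches it
    p_all_neg_if_infected = (1 - combined_sens) * prior
    p_all_neg_if_healthy = (spec ** n) * (1 - prior)
    p_infected_despite_negatives = p_all_neg_if_infected / (
        p_all_neg_if_infected + p_all_neg_if_healthy
    )
    print(n, "negative test(s): combined sensitivity", round(combined_sens, 2),
          "| P(infected anyway)", round(p_infected_despite_negatives, 3))

Even with two negatives, these toy numbers only cut the teacher’s 5 percent prior to roughly 1.5 percent, not to zero, which is presumably the worry behind Ko’s “if you have a cold, stay home” framing.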
https://www.theatlantic.com/health/archive/2020/03/where-do-you-go-if-you-get-coronavirus/607759/