While base rate neglect is a really interesting problem, I’ve always been a bit skeptical of how much it matters that doctors get it wrong so often. In real medical contexts, not everyone is tested for every disease; one is normally tested because one has symptoms or some risk factor. So in most circumstances the prior probability of disease among the people actually being tested is higher than the population base rate, and the chance that a positive result is a false positive is lower than the textbook calculation would suggest.
On the other hand, there are circumstances where this precise problem does seem to show up (for example, when some people tried to push for mandatory premarital HIV testing).
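To put numbers on that intuition, here is a minimal sketch in Python of how the meaning of a positive result depends on the prior in the tested population. The sensitivity and false-positive rate below are made-up placeholders for illustration, not figures from any particular test or study.

```python
# Minimal sketch: how the meaning of a positive test depends on the base rate
# in the tested population. Sensitivity and false-positive rate below are
# illustrative placeholders, not figures from any particular test.

def p_disease_given_positive(prior, sensitivity=0.9, false_positive_rate=0.05):
    """Bayes' rule: P(disease | positive result)."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Population-wide screening with a low base rate: most positives are false.
print(p_disease_given_positive(prior=0.005))  # ~0.08

# Someone tested because of symptoms or a known risk factor: higher prior,
# so a positive result is much more likely to be a true positive.
print(p_disease_given_positive(prior=0.2))    # ~0.82
```

With these illustrative numbers, the share of positives that are false drops from over 90% at a 0.5% prior to under 20% at a 20% prior, which is the point about tested populations above.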
This occurs all the time.
http://www.psychologicalscience.org/journals/pspi/pspi_8_2_article.pdf
In 2007, 160 gynecologists were provided with the relevant health statistics needed for calculating the chances that a woman with a positive mammogram test actually has cancer. The correct answer was about 10%. The majority of them grossly overestimated the probability of cancer, answering "90%" or "81%."
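For anyone who wants to see where the "about 10%" comes from, here is a back-of-the-envelope check in Python. The inputs are the approximate figures used in that mammography problem (roughly 1% prevalence, 90% sensitivity, 9% false-positive rate); treat them as approximate and see the linked paper for the exact wording.

```python
# Back-of-the-envelope check on the "about 10%" answer, using approximate
# inputs for the mammography problem discussed above (~1% prevalence,
# ~90% sensitivity, ~9% false-positive rate). Counting per 1000 women
# keeps the arithmetic transparent.

women = 1000
prevalence = 0.01           # P(cancer)
sensitivity = 0.90          # P(positive | cancer)
false_positive_rate = 0.09  # P(positive | no cancer)

with_cancer = women * prevalence                                # 10 women
true_positives = with_cancer * sensitivity                      # ~9 women
false_positives = (women - with_cancer) * false_positive_rate   # ~89 women

p_cancer_given_positive = true_positives / (true_positives + false_positives)
print(f"{p_cancer_given_positive:.1%}")  # ~9.2%, i.e. roughly 1 in 10
```

So of the roughly 98 women per thousand who test positive, only about 9 actually have cancer, which is why the correct answer is near 10% rather than 80–90%.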
When most doctors are asked to interpret probabilistic lab results, they suck at it. The doctors just don’t think that way. Instead they have learned what to say so that the patient will immediately take the next recommended step, i.e., get a biopsy. From the doctor’s perspective, missing a cancer is a much worse outcome than needlessly worrying a patient. Their cached answer is “you have a high probability of cancer, so a biopsy is needed immediately,” which is why their guesses cluster in the 80–90% range.