Perhaps a better example of a “fallacy” that is really just a mismatch between expected and actual meanings is how many doctors fail to accurately estimate the probability that a positive result is a false positive. It’s called the base rate fallacy and works like this (from here):
1% of women screened have breast cancer.
80% of women with breast cancer will get a positive result.
9.6% of women will get a false positive from the mammogram.
Given that a woman has a positive result, what is the probability that she actually has breast cancer?
It turns out that the probability she has cancer is only 7.76%, but many doctors would overestimate this.
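Here is a minimal sketch of that calculation, reading the 9.6% as the false-positive rate among women without cancer (the interpretation the linked page intends):

```python
# Bayes' theorem with the numbers from the quoted problem.
prevalence = 0.01       # 1% of women screened have breast cancer
sensitivity = 0.80      # 80% of women with cancer test positive
false_pos_rate = 0.096  # 9.6% of women without cancer test positive

p_positive = prevalence * sensitivity + (1 - prevalence) * false_pos_rate
p_cancer_given_positive = prevalence * sensitivity / p_positive

print(f"P(cancer | positive) = {p_cancer_given_positive:.2%}")         # ~7.76%
print(f"positives that are false: {1 - p_cancer_given_positive:.2%}")  # ~92.24%
```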
Well, I argue that this is a mismatch between the intellectual way a mathematician would present the problem and the way a doctor experiences medicine. A doctor who frequently gives mammograms would certainly learn over time that most women who have a positive result don’t have breast cancer. From their point of view, the occurrence of “false positives”—i.e., results that were positive but false—is 92%. Yet on a pen-and-paper test they are told that the rate of false positives is 9.6%, and they misinterpret this.
On the one hand, you could just explain very clearly what is meant by this other rate of false positives. Doctors are generally intelligent and can understand this little circumlocution. On the other hand, you could instead give them the more natural figure—the one that jibes with experience—that 92% of positives are false, and remove the fallacy altogether.
I think that mathematicians have an obligation to use definitions that jibe with experience (good mathematics always does?), especially instead of calling common sense “fallacious” when, actually, it is just being more Bayesian than frequentist.
The page you link includes the “9.6% false positive” usage, but that terminology is preceded by,
9.6% of women without breast cancer will also get positive mammographies
making the interpretation of the phrase clear.
The mismatch isn’t intellectual versus experiential in the way you claim. Most people get the problem right when the numbers are stated as frequencies relative to some large number instead of probabilities or percentages, i.e., when the wording primes people to think about counting members in a class.
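As a sketch of what that frequency framing looks like, here are the same numbers restated as counts in a hypothetical cohort of 10,000 women:

```python
# The same problem in "natural frequency" form: counting members of a class.
cohort = 10_000
with_cancer = cohort * 0.01                       # 100 women have cancer
true_positives = with_cancer * 0.80               # 80 of them test positive
false_positives = (cohort - with_cancer) * 0.096  # ~950 healthy women test positive

share = true_positives / (true_positives + false_positives)
print(f"{true_positives:.0f} of ~{true_positives + false_positives:.0f} "
      f"positives actually have cancer ({share:.1%})")  # 80 of ~1030, about 7.8%
```

Phrased as counts, it is much easier to see that a positive result usually does not mean cancer.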
Most people get the problem right when the numbers are stated as frequencies relative to some large number instead of probabilities or percentages, i.e., when the wording primes people to think about counting members in a class.
It’s still pretty scary that doctors would have to be primed to get basic statistical inference right (a skill that’s pretty essential to what they claim to do). The real world doesn’t hand you problems in neat, well-defined packages. You can guess the teacher’s password, but not Nature’s.
After I got into a warm discussion with some other members of the speech and debate club in high school, I started doing a little research into the field of medicine and its errors.
Long story short: doctors are not the experts most people (including many of them) believe them to be, our system of medicine is really screwed up, and it’s not even obvious that we derive a net benefit from medical intervention considered overall.
(It’s pretty obvious that some specific interventions are extremely important, but they’re quite basic and do not make up the majority of all interventions.)
I was about to lecture you on how wrong you are, until I realized I’ve never encountered a counterexample.
Please note that I do not rule out the possibility that we derive a net benefit. It’s just that it isn’t obvious that we do.
A counterexample of my being right? Or a counterexample relating to medicine?
As in, “I have never encountered a doctor that actually understood the limits of his knowledge and how to appropriately use it, nor a clinical practice that wasn’t basically the blind leading the blind.”
Okay. I was unsure if your statement was meant to be a personal insult or a comment about medicine—your comments have cleared that up for me.
If I may offer a suggestion:
Access NewsBank from your local library, go to the “search America’s newspapers” option, and do some searching for the phrase “nasal radium”. There will be lots of duplication. You may find it useful to only search for articles written between 1990 and 1995, just to get a basic understanding of what it was.
Then realize that the vast majority of surgical treatments were introduced in pretty much the same way, and had the same amount of pre-testing, as nasal radium.
I don’t infer doctors’ actual performances from their responses to a word problem, so I’m not that scared. I don’t think byrnema was wrong to claim that
A doctor who frequently gives mammograms would certainly learn over time that most women who have a positive result don’t have breast cancer.
Er, the whole point of statistical inference (and intelligence more generally) is that you can get the most knowledge from the least data. In other words, you can figure stuff out before learning it “the hard way”. If doctors “eventually figure out” that most positives don’t actually mean cancer, that means poor performance (judged against professionals in general), not good performance!
“Eventually” was byrnema’s usage—I’d bet doctors are told outright the positive and negative predictive values of the tests by the test designers.
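For reference, a sketch of what those predictive values would be for the test in the quoted problem (same assumed numbers as above):

```python
# Positive and negative predictive values: the figures a test designer
# could hand to doctors directly, computed from the problem's numbers.
prevalence, sensitivity, specificity = 0.01, 0.80, 1 - 0.096

ppv = prevalence * sensitivity / (
    prevalence * sensitivity + (1 - prevalence) * (1 - specificity))
npv = (1 - prevalence) * specificity / (
    (1 - prevalence) * specificity + prevalence * (1 - sensitivity))

print(f"PPV = {ppv:.2%}")  # ~7.76%: a positive means cancer only ~8% of the time
print(f"NPV = {npv:.2%}")  # ~99.78%: a negative almost certainly means no cancer
```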
I see no disagreement. You are describing another way that the numbers could be presented so that they would be understood. I am not suggesting that doctors literally confuse the two ways of defining “false positive”, but that the definition of false positive given is apparently so far outside their experience that they are confused or mistaken about how to apply it correctly. My point is that if they actually needed it outside the exam once or twice (i.e., if the result was connected enough with experience to identify the correct or incorrect answer), they would readily learn how to do it.
You have asserted that the reason doctors can accurately tell patients their chances after a diagnostic test, even if they perform poorly on the word problem, is that they are confused about the term “false positive”. But the problem can be phrased without using the word “positive” at all, and people will still get it wrong if it’s phrased in terms of probabilities and get it right if it’s phrased in terms of relative frequencies. So the fact that doctors can tell patients their chances after a diagnostic test even if they perform poorly on the word problem has nothing to do with their being confused about false positives.