In general, I’d say worse.
It’s not that doctors are ignorant; it’s that they know so much that isn’t so.
How so? I’d like to avoid such a fate if possible.
(I will agree that, since medicine is a constantly changing field, many things doctors learn are later disproven. ACE inhibitors used to be contraindicated in congestive heart failure, but now they’re first line. That’s not so much irrationality, though, as a lack of data.)
I was referring to mistakes in epistemology and decision theory.
Lack of an FDA-supervised, double-blinded, placebo-controlled study evaluating a treatment does not mean “there is no evidence” the treatment works.
Failure to reject the null hypothesis for a statistic of a particular positive outcome measure, over a particular set of patients, for a particular treatment and regimen, does not imply that “the treatment does not work” or that “the treatment should not be tried”. Besides the multitude of ways this fails predictively for an individual case, it completely ignores the cost and risk on both sides of treatment/no treatment, and so is crap as decision theory.
To briefly summarize: most doctors replace what could be an exercise in decision theory, including causal inference, process modeling, and reasoning tailored to the information relevant to the particular patient, with officially blessed lookup tables based on general population statistics.
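To make that gap concrete, here is a minimal sketch, with every number hypothetical, of the kind of expected-value tally such a decision-theoretic exercise might involve, set next to the lookup-table rule it gets replaced with. It is an illustration of the reasoning above, not anyone’s actual clinical protocol.

```python
# A minimal sketch of the point above: a treatment can fail to reach
# statistical significance in trials and still be worth trying once the
# costs and risks on BOTH sides are weighed. All numbers are hypothetical.

def expected_value(p_benefit, benefit, p_harm, harm, cost):
    """Crude expected value of trying a treatment, in arbitrary utility units."""
    return p_benefit * benefit - p_harm * harm - cost

# Hypothetical case: a cheap, well-tolerated drug with weak evidence behind it.
p_benefit = 0.25   # chance it actually helps this particular patient
benefit   = 40.0   # value of relief from the chronic problem
p_harm    = 0.05   # chance of a meaningful side effect
harm      = 10.0   # disvalue of that side effect
cost      = 1.0    # money, time, and hassle of running the trial of one

ev_try  = expected_value(p_benefit, benefit, p_harm, harm, cost)
ev_skip = 0.0      # doing nothing: the chronic problem simply continues

print(f"EV(try) = {ev_try:.1f}, EV(skip) = {ev_skip:.1f}")   # EV(try) = 8.5
# Positive under these assumptions, even though a study of this drug might
# well have failed to reject the null hypothesis.

# The lookup-table rule described above ignores every quantity used here:
def guideline_rule(approved_for_this_indication):
    return "prescribe" if approved_for_this_indication else "decline"
```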
I would probably feel better if doctors admitted this wasn’t a proper way to heal patients, but merely the most convenient way for doctors, the health care industry, and their government regulators to dole out treatments to patients while protecting their income and their control over patients. But I think they earnestly believe in this paradigm, which is wholly suboptimal for the patient.
I’m sure many doctors do as you describe, but in my experience, most specialist physicians don’t fall into that trap. They will prescribe “un-proven” and un-approved treatments if they think the risk-benefit relationship is favorable. However, it takes significantly more knowledge about the disease, your specific patient, and all the latest research to make a decision like that. Furthermore, if you’re wrong, it’s your hide on the line. If your family doctor knows all that, then they’re a specialist.
The cost of treatment/no treatment: I’m going to disagree with you strongly there. That’s drilled into our heads every day in school. The cost to the patient, the cost to society, the side effects the patient experiences, the risk of serious adverse reactions, the risk of going without treatment, the chance that the treatment doesn’t even work at all (in the case of the unproven treatments)… we talk about this almost every day.
So, agreed: “Not proven beyond a wide margin of error” is not the same as “no evidence”; however, I don’t think many doctors believe that. That is, it’s not a flaw of rationality; it is either a convenience thing, a lawsuit thing, or, most often, a limit of the doctor’s knowledge.
I’m sure many doctors do as you describe, but in my experience, most specialist physicians don’t fall into that trap.
Specialists are a mixed bag. Some will think. Others will have their hammer, and every problem will be a nail on a conveyor belt. So some may be more adventurous, but I haven’t met any who seemed to have a decent grasp of statistics or decision theory.
Furthermore, if you’re wrong, it’s your hide on the line.
Yes. Everyone involved, from payers to regulators to manufacturers to care institutions to care providers to patients, protects their own interests first. The problem for the patient is that his power is only negative: not seeking treatment, or refusing treatment. All other entities can and do legally limit his options based on their interests.
If your family doctor knows all that, then they’re a specialist.
Or maybe an old-fashioned doctor in private practice who has some respect for the limits of his profession’s knowledge and some respect for the autonomy of a patient. As one doctor expressed it to me, “Generally we don’t know if a treatment is going to work. We try it and see.” I’ve found that such doctors seem much more open to more speculative treatments than institutional care facilities, as one would expect.
The cost of treatment/no treatment: I’m going to disagree with you strongly there.
Yes, people do a lot of talking. But the rubber meets the road in what people do, not in their talky talk. When the cost accrues to the patient alone, that cost is at best a secondary consideration to all other actors who have the power to limit the patient’s options.
But I’d like to hear what the approved theoretical procedure is. A patient comes in reporting a chronic problem. He has tried all the “standard options”. He is proposing an experimental treatment for it involving an off-label use of a widely prescribed medication that is generally well tolerated but has potential side effects. He is basing this on anecdotal reports on web sites, PubMed articles, and Wikipedia. In theory, what is the approved method for evaluating this request by a patient? How are the potential risks and rewards tallied up to make a decision?
You disagree with me strongly here. I’d like to hear the generally approved decision theory applied to this case.
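For concreteness, here is one way such a tally could go in principle. It is a hedged sketch rather than any sanctioned procedure; the function name and every number stand in for judgments the doctor and patient would actually have to make.

```python
# One way the risks and rewards of the off-label request *could* be tallied.
# Not an officially approved method; every number is a hypothetical stand-in.

def breakeven_probability(benefit, p_side_effect, side_effect_cost, trial_cost):
    """Probability of benefit at which trying the treatment breaks even
    with doing nothing (expected value = 0)."""
    return (p_side_effect * side_effect_cost + trial_cost) / benefit

# Hypothetical inputs for a well-tolerated, widely prescribed drug used off label:
benefit          = 50.0   # relief from a chronic problem the standard options failed to fix
p_side_effect    = 0.10   # chance of a tolerable but real side effect
side_effect_cost = 5.0
trial_cost       = 2.0    # prescription, monitoring, follow-up visits

threshold = breakeven_probability(benefit, p_side_effect, side_effect_cost, trial_cost)
print(f"Worth trying if P(it helps this patient) exceeds {threshold:.2f}")   # 0.05
# With these (hypothetical) stakes, even anecdotal evidence suggesting a
# 1-in-20 chance of benefit would justify a supervised trial of the drug.
```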
“Not proven beyond a wide margin of error” is not the same as “no evidence”; however, I don’t think many doctors believe that.
Doctors, regulators, and some people who think they’re being scientific will routinely say “there is no evidence for X” when there is plenty of evidence for X. When pressed on the matter by putting evidence in front of them, they will disparage the evidence instead of admitting that they were falsely claiming there was none.
I have no doubt that if they were strapped into a chair with a gun held to their heads and told to find evidence, they could quickly come up with some evidence for X. When talking to other doctors, maybe they do say that the evidence is “not proven beyond a wide margin of error” instead of saying that there is “no evidence”. But it’s so much more convenient to “shade” the truth when talking to patients, since they know what’s better for the patient anyway. That it simplifies their problem, evades effort on their part, hides their ignorance, and paints them as an all-knowing authority is purely coincidental.
So with a gun to their heads, it’s “the evidence isn’t as convincing as I’d like”. But in their practice, in life? “There is no evidence”, and that’s the belief that determines their actions. Which one do they really believe: the one they may never say, or the one they say a hundred times every day?
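To make the distinction between weak evidence and no evidence concrete, here is a toy Bayesian update with made-up numbers; it is only an illustration that non-trial evidence still moves the posterior, just by less.

```python
# Toy Bayesian update showing that "weak evidence" is still evidence.
# All numbers are made up, and the two observations are treated as
# independent, which is itself an assumption.

def update(prior_odds, likelihood_ratio):
    """Posterior odds after one piece of evidence."""
    return prior_odds * likelihood_ratio

prior_odds = 0.10 / 0.90        # prior: ~10% chance the treatment helps

# How much more likely each observation is if the treatment works than if not:
anecdotal_reports = 1.5          # weak
uncontrolled_case_series = 2.0   # weak-ish

posterior_odds = update(update(prior_odds, anecdotal_reports), uncontrolled_case_series)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"P(helps) went from 0.10 to {posterior_prob:.2f}")   # 0.25
```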
As you say, what’s done is what’s convenient. Convenience for doctors comes from simplified diagnoses and treatments, following their institution’s procedures and guidelines, avoiding legal liability, and avoiding hassles with insurance companies, regulators, and patients. It is much more convenient for everyone but the patient to turn the job of healing a patient into a job of following rules, procedures, and guidelines. Naturally, it’s then convenient to convince the patient that all these rules, procedures, and guidelines are really the best way to heal him. They are not.
What is true is what is convenient to those in positions of power, and that extends to statistical, diagnostic, and treatment methods.