Am I delusional, or am I correct in thinking chiropractors are practitioners of something a little above bloodletting and way below actual modern medicine?
Others have discussed the scientific evidence, but I’ll flesh out shminux’s comment with some anecdotal evidence. My father developed some lower back pain from his job as a pilot, and found that a chiropractor was immediately able to solve the problem; he recommends that people try it because a single session is low-risk and it should be obvious whether or not it’s working.
In general, statistical analysis of medical treatments runs into the issue that it’s easy to ask the question “did people given treatment X get better on average?” and difficult to ask the question “how can we tell who will get better and who won’t given treatment X?”, and the latter question is the one that tends to be practically useful.
Indeed. They call this “effect modification” in epidemiology. I guess in some sense, this is just another guise of the curse of dimensionality in the context of determining causal effects. Lots of covariates might be relevant to why [X] helps you, but trials aren’t very large, and the simple regression models people use probably aren’t right very often. So it’s hard to establish whether E[Y | do(x), C] - E[Y | do(placebo), C] is appreciably different from 0 for a sufficiently multidimensional C.
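To make that concrete, here is a minimal simulation sketch (the covariate, effect sizes, and sample size are all made up for illustration): even when the conditional effect is large in one stratum of C, the average effect can look modest, and the per-stratum estimates from a typically sized trial are noisy. And that is with a single binary C, not a multidimensional one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # a modestly sized trial

# Hypothetical binary pretreatment covariate C (the effect modifier)
C = rng.integers(0, 2, size=n)
# Randomized assignment, i.e., do(B=b) realized by randomization
B = rng.integers(0, 2, size=n)

# Assumed data-generating process: the treatment only helps when C = 1,
# so the conditional effects are 0 (C=0) and 2 (C=1), averaging to 1.
Y = np.where(C == 1, 2.0, 0.0) * B + rng.normal(0, 3, size=n)

# The average effect pools both strata and understates the C=1 benefit.
ate = Y[B == 1].mean() - Y[B == 0].mean()
print(f"estimated average effect: {ate:.2f} (true: 1.0)")

# Stratum-specific estimates: unbiased, but each uses only ~n/4
# treated and ~n/4 control units, so they are noisy even here.
for c in (0, 1):
    in_stratum = C == c
    est = (Y[in_stratum & (B == 1)].mean()
           - Y[in_stratum & (B == 0)].mean())
    print(f"C={c}: estimated effect {est:.2f} (true: {2.0 * c:.1f})")
```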
edit: In case it is not clear, p(A | do(B=b), C) is defined to be p(A, C | do(B=b)) / p(C | do(B=b)) (under appropriate assumptions that preclude dividing by zero). This is one of the reasons I don’t like the do(.) notation so much. In counterfactual notation, we can make a distinction about whether C is under the interventional regime or not; that is, in general p(A(b) | C(b)) is not equal to p(A(b) | C) (but it is for any pretreatment covariate C, because then p(C | do(B=b)) = p(C(b)) = p(C)). That is, the future cannot affect the past.
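A tiny numeric illustration of the pretreatment point (the distributions here are made up for the example): conditioning on B in observational data shifts the distribution of a confounded pretreatment C, but intervening on B leaves it alone, so p(C | do(B=b)) = p(C).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Pretreatment covariate C, drawn before any treatment decision.
C = rng.binomial(1, 0.3, size=n)

# Observational regime: B depends on C, so conditioning on B = 1
# selects a C-skewed subpopulation and p(C | B=1) != p(C).
B_obs = rng.binomial(1, np.where(C == 1, 0.8, 0.2))
print("p(C=1):          ", round(C.mean(), 3))
print("p(C=1 | B=1):    ", round(C[B_obs == 1].mean(), 3))

# Interventional regime do(B=1): we assign B ourselves, ignoring C.
# C was already drawn, so its distribution is untouched. This is the
# "future cannot affect the past" point: p(C | do(B=1)) = p(C).
B_do = np.ones(n, dtype=int)
print("p(C=1 | do(B=1)):", round(C[B_do == 1].mean(), 3))
```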
In general, statistical analysis of medical treatments runs into the issue that it’s easy to ask the question “did people given treatment X get better on average?” and difficult to ask the question “how can we tell who will get better and who won’t given treatment X?”, and the latter question is the one that tends to be practically useful.
Anecdotes don’t answer the latter question any better. Not even if you’re given a statistically effective treatment and happen to improve. I suspect the placebo effect is stronger the more dramatic the treatment.
Anecdotes don’t answer the latter question any better. Not even if you’re given a statistically effective treatment and happen to improve.
Agreed that anecdotes are single points of data with potentially unknown selection effects. Not sure if I agree about the second part; it seems like treatment successes vary in their obviousness and I suspect some component of that measurement will always be anecdotal.
When you can do statistics, of course, you should; Gendlin’s Focusing seems like a good example of the benefit of trying to figure out whether or not therapeutic success could be predicted (it could, and then they could target the success factor directly).
An interesting thing about anecdotal data is that I tend to hugely overestimate my effect in making people better. To bring myself back to earth, I look at the puny effects many treatments have been demonstrated to have and conclude that many patients get better regardless of treatment. I certainly underestimate my faults too, but that’s a tougher nut to crack. I’ve also seen doctors overestimate other doctors’ faults: of course the failure to intervene with a treatment shown to avoid a bad outcome by a few percentage points killed the patient!
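Rough numbers make the point (these are purely illustrative, not from any real trial): if a treatment cuts a bad outcome from 10% to 7%, the vast majority of treated patients who do well would have done well anyway, which is exactly the setup for misattributing both credit and blame in individual cases.

```python
# Illustrative numbers, not from any real trial: a treatment that
# reduces the risk of a bad outcome from 10% to 7%.
p_bad_untreated = 0.10
p_bad_treated = 0.07

# Absolute risk reduction and number needed to treat.
arr = p_bad_untreated - p_bad_treated
nnt = 1 / arr
print(f"absolute risk reduction: {arr:.0%}, number needed to treat: {nnt:.0f}")

# Among treated patients who did well, the fraction actually saved by
# the treatment (assuming the treatment never harms anyone):
frac_saved = arr / (1 - p_bad_treated)
print(f"good outcomes attributable to treatment: {frac_saved:.1%}")
# So roughly 97% of treated patients who did well would have done
# well untreated, yet each one looks like a treatment success.
```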