Reminds me of the proposed double-blind study of the effectiveness of parachutes in preventing injury during falls from great heights.
I thought it was just a trite joke, but here it is.
ETA: Posted this from work, didn’t realize it was paywalled. Here’s a PDF.
Brilliantly done, no matter the point they were trying to make. The headings say it all...
Evidence based pride and observational prejudice
Natural history of gravitational challenge
The parachute and the healthy cohort effect
The medicalisation of free fall
Parachutes and the military industrial complex
A call to (broken) arms
They’re technically not incorrect, but they are on the wrong side of the debate. It’s true that we can occasionally understand things without directly experimenting on them, but we could use more experimentation, not less.
If you say that all experiments have to be placebo-controlled, double-blind experiments, then you aren’t advocating more experiments.
You are advocating that the same resources be spread over fewer experiments, with each experiment held to a higher standard. http://www.blog.sethroberts.net/2011/01/25/monocultures-of-evidence/
The interesting thing is often not whether a treatment works but how it compares to other treatments. As far as I know, in cancer research different groups often receive different treatments, which are then compared. Sadly, it seems that sound statistical knowledge is not widespread in all the places where it is needed; I read a book by German medical professors who complained bitterly about exactly that. There is no need to slavishly follow one standard of testing. What would be awesome is a better understanding of how to get good results with the least effort (in the case of medicine: the fewest people treated ineffectively).
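To make “fewest people treated ineffectively” concrete, here is a minimal sketch of one such approach, adaptive allocation via Thompson sampling, which steers later patients toward whichever arm currently looks better instead of fixing a 50/50 split in advance. The two success rates below are invented for illustration.

    # Minimal sketch of adaptive allocation (Thompson sampling) with two
    # hypothetical treatment arms; the success rates are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    p_true = {"A": 0.55, "B": 0.70}   # true (unknown to the trial) success rates
    successes = {"A": 1, "B": 1}      # Beta(1, 1) prior for each arm
    failures = {"A": 1, "B": 1}
    assigned = {"A": 0, "B": 0}

    for _ in range(1000):             # assign 1000 patients one at a time
        # Draw a plausible success rate for each arm from its posterior,
        # then give the next patient the arm with the higher draw.
        draws = {arm: rng.beta(successes[arm], failures[arm]) for arm in p_true}
        arm = max(draws, key=draws.get)
        assigned[arm] += 1
        if rng.random() < p_true[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1

    print(assigned)  # most patients end up on the better arm, B

With the rates above, the sampler sends most of the 1000 simulated patients to the better arm while still occasionally exploring the worse one, which is exactly the effort-vs-knowledge trade a fixed equal split ignores.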
While more controlled experiments are undoubtedly a good thing, observational studies are often not useless, since one can often make a plausible argument for extracting causation from them. Sadly, the default state of causal analysis in medicine remains “use regression.”
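As an illustration of why “use regression” is a shaky default, here is a minimal simulated sketch (all numbers invented): a confounder drives both treatment assignment and outcome, so the naive regression of outcome on treatment alone gets the sign of the effect wrong, and adjusting for the confounder recovers it only because we happened to simulate, and measure, the right covariate.

    # Minimal simulated sketch: regression on observational data misled by
    # a confounder (severity) that drives both treatment and outcome.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    severity = rng.normal(size=n)                  # the confounder
    treated = (severity + rng.normal(size=n) > 0)  # sicker patients get treated more
    outcome = 2.0 * severity - 1.0 * treated + rng.normal(size=n)  # true effect: -1

    # Naive regression of outcome on treatment alone: biased by confounding.
    X_naive = np.column_stack([np.ones(n), treated])
    naive = np.linalg.lstsq(X_naive, outcome, rcond=None)[0][1]

    # Adjusting for severity recovers the true effect, but only because
    # we happened to measure the right covariate.
    X_adj = np.column_stack([np.ones(n), treated, severity])
    adjusted = np.linalg.lstsq(X_adj, outcome, rcond=None)[0][1]

    print(f"naive:    {naive:+.2f}")     # around +1.3: treatment looks harmful
    print(f"adjusted: {adjusted:+.2f}")  # around -1.0: the true, beneficial effect

The point is not that regression never works, only that it works exactly when the causal assumptions hold, and establishing those assumptions is the “plausible argument” an observational study owes you.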
Oh boy.
Which in turn reminds me of The Onion news piece ‘Multiple Stab Wounds May Be Harmful To Monkeys’. http://www.youtube.com/watch?v=cQ7J7UjsRqg