No, because without knowing how likely people are to die of colon cancer without eating chocolate, I would have no idea if that contradicted or confirmed my own experience.
Which suggests to me that rather than being more reliable on average than one’s own experience, the average paper is, in fact, talking about things that are outside the normal person’s day-to-day experience. But in those rare cases where a single paper contradicts something I’ve seen myself, I would have no problem at all saying it’s wrong.
It seems that we are using the phrase “one’s own experience” in different ways. If I knew 100 people, 20 of whom ate much more chocolate than the rest, and out of those 20 no one had colon cancer, while five of the rest did, I would say that my personal experience tells me that chocolate consumption is anticorrelated with colon cancer. Whereas you use “one’s own experience” only to denote things which are really obvious.
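To put a rough number on that hypothetical (the 100 people, 20 heavy chocolate eaters, and five cancer cases are invented for the example), here is a quick check of how much evidence zero cases among the 20 eaters actually provides — a one-sided hypergeometric (Fisher-style) p-value:

```python
from math import comb

# Hypothetical sample: 100 people, 5 colon-cancer cases,
# 20 heavy chocolate eaters with 0 cases among them.
N, K, n, k = 100, 5, 20, 0  # population, total cases, eaters, cases among eaters

# One-sided p-value: the probability of seeing k or fewer cases among
# the n eaters if chocolate were irrelevant (a pure hypergeometric draw).
p = sum(comb(K, i) * comb(N - K, n - i) for i in range(k + 1)) / comb(N, n)

print(round(p, 3))  # ~0.32
```

Under these made-up numbers, a pattern that feels striking in personal experience is entirely consistent with chance and nowhere near conventional statistical significance, which is roughly the caution the next comment raises.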
The problem is that most people are far less cautious than you probably are when forming hypotheses from their own experience. I have heard lots of statements roughly analogous to “although my doctor says otherwise, chocolate in fact cures the common cold; normally it takes a week to get rid of it, but last year I ate a lot of chocolate and was healthy in six days”. Which is what the original article tries to warn against.
“While you use “one’s own experience” only to denote things which are really obvious.”
No, I use it to denote things I have experienced.
For example, there is disagreement over whether vitamin C megadoses can help certain kinds of cancers. I’ve actually seen papers on both sides. However, had I only seen a single paper that said vitamin C doesn’t help with cancer, I would have perfectly good grounds for dismissing it—because I have seen two people gain a significant number of QALYs from taking vitamin C when diagnosed with terminal, fast-acting, painful cancers.
That’s not a ‘really obvious’ statement—it’s very far from an obvious statement—but “my grandfather is still alive, in no pain and walking eight miles a day, when six months ago he was given two months to live” is stronger evidence than a single unreplicated paper.
Is “my grandfather is still alive, in no pain and walking eight miles a day, when six months ago he was given two months to live” stronger evidence for the effectiveness of vitamin C than a peer-reviewed paper saying “we have conducted a study on 1000 patients with terminal cancer; the survival rate in the group treated with large doses of vitamin C was not greater than the survival rate in the control group”? If so, why?
It would depend on the methodology used. I have seen enough examples of horribly bad—not to say utterly fraudulent—science in medical journals that I would actually take publication in a medical journal of a single, unreplicated study as slight evidence against the conclusion it comes to.
(As an example, the Mayo Clinic published a study with precisely those results in the early 80s, claiming to be ‘unable to replicate’ a previous experiment. Except that where the experiment they were trying to ‘replicate’ had used intravenous doses, they used oral ones. And used a lower dose. And spaced the doses differently. And ended the trial after a much shorter period.)
So my immediate conclusion on seeing that result in a single paper would be “No, they didn’t”. Because when you’ve seen people in agony, dying, and you’ve seen them walking around and healthy a couple of months later, and you’ve seen that happen repeatedly, then that is very strong evidence. And something you can test yourself is always better than taking someone else’s word for it.
However, if that study were replicated, independently, and had no obviously cretinous methodological flaws upon inspection, then it would be strong evidence. But if something I don’t directly observe myself contradicts my own observations, then I will always put my own observations ahead of those of someone else.
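The disagreement above is ultimately about how much weight each kind of evidence deserves. One way to make that explicit is a Bayesian odds update, where each piece of evidence contributes a likelihood ratio. The numbers below are purely illustrative assumptions, not values anyone in the thread committed to:

```python
# Sketch of combining evidence as multiplicative Bayesian odds updates.
# All likelihood ratios here are illustrative assumptions.

def update_odds(prior_odds, likelihood_ratios):
    """Multiply the prior odds by the likelihood ratio of each observation."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 1 / 4          # assumed prior odds that the treatment works (1:4 against)
personal_obs = 10.0    # assumed LR for a striking personal observation
single_paper = 1 / 3   # assumed LR for one unreplicated negative paper

posterior = update_odds(prior, [personal_obs, single_paper])
print(round(posterior, 2))  # 0.83
```

Whether the anecdote or the paper wins then depends entirely on the likelihood ratios one assigns to each source, which is what the two commenters implicitly disagree about.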