The word “replicated” seems to have disappeared from this paraphrasing, and that flips the paraphrased statement from true to false.
On the other hand, the paraphrase also changed ‘far more’ to ‘more’, so technically it scrapes through. Even though the peer-review process is only slightly better than chance, it does add some value.
A single peer-reviewed paper is still not more evidence than one’s own experiences. If a paper were published tomorrow, no matter how well peer-reviewed, saying, say, “chocolate is immediately lethal to humans” I would have ample reason to dismiss that, as I have seen many examples of people eating chocolate and not immediately dying. Were that paper replicated many times over, however, I’d have to start wondering about what was causing the discrepancy. But with one paper? Defy the data.
If the paper’s claim is as extraordinary as your example, then it is very probably bullshit. But on average, papers tend to be more reliable than one’s own experiences. After all, there aren’t many peer-reviewed papers out there about the immediate lethality of chocolate, so the analogy is somewhat stretched. If a single paper claimed that chocolate increases the risk of dying from colon cancer by 5%, but all the chocolate lovers you personally knew were absolutely healthy, would you also defy the data?
No, because without knowing how likely people are to die of colon cancer without eating chocolate, I would have no idea if that contradicted or confirmed my own experience.
Which suggests to me that rather than being more reliable on average than one’s own experience, the average paper is, in fact, talking about things that are outside the normal person’s day to day experience. But in those rare cases when a single paper contradicts something I’ve seen myself, then I would have no problem at all in saying it’s wrong.
It seems that we are using the phrase “one’s own experience” in different ways. If I knew 100 people, 20 of whom ate much more chocolate than the rest, and no one of those 20 had colon cancer while five of the rest did, I would say that my personal experience tells me that chocolate consumption is anticorrelated with colon cancer. While you use “one’s own experience” only to denote things which are really obvious.
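(Incidentally, those hypothetical counts are weaker evidence than they feel like. A quick one-sided Fisher’s exact test on the made-up numbers above, sketched here with nothing but the standard library, shows how likely such a split is by pure chance:)

```python
from math import comb

# Hypothetical counts from the comment above (illustrative, not real data):
# 20 heavy chocolate eaters with 0 colon-cancer cases,
# 80 others with 5 cases, out of 100 acquaintances.
eaters, eater_cases = 20, 0
others, other_cases = 80, 5
total = eaters + others
cases = eater_cases + other_cases

# One-sided hypergeometric probability of seeing this few cases among
# the chocolate eaters if chocolate were irrelevant (Fisher's exact test).
p = sum(
    comb(cases, k) * comb(total - cases, eaters - k) / comb(total, eaters)
    for k in range(eater_cases + 1)
)
print(round(p, 3))  # → 0.319
```

A p-value around 0.32 means roughly one friend group in three would show that pattern even if chocolate did nothing, which rather supports the point that casual personal-experience correlations are thin evidence.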
The problem is that most people are far less cautious when forming hypotheses from their own experience than you probably are. I have heard lots of statements roughly analogous to “although my doctor says otherwise, chocolate in fact cures the common cold; normally it takes a week to get rid of it, but last year I ate a lot of chocolate and was healthy in six days”. Which is what the original article tries to warn against.
“While you use “one’s own experience” only to denote things which are really obvious.”
No, I use it to denote things I have experienced.
For example, there is disagreement over whether vitamin C megadoses can help certain kinds of cancers. I’ve actually seen papers on both sides. However, had I only seen a single paper that said vitamin C doesn’t help with cancer, I would have perfectly good grounds for dismissing it—because I have seen two people gain a significant number of QALYs from taking vitamin C when diagnosed with terminal, fast-acting, painful cancers.
That’s not a ‘really obvious’ statement—it’s very far from an obvious statement—but “my grandfather is still alive, in no pain and walking eight miles a day, when six months ago he was given two months to live” is stronger evidence than a single unreplicated paper.
Is “my grandfather is still alive, in no pain and walking eight miles a day, when six months ago he was given two months to live” stronger evidence for vitamin C’s effectiveness than a peer-reviewed paper saying “we have conducted a study on 1000 patients with terminal cancer; the survival rate in the group treated with large doses of vitamin C was not greater than the survival rate in the control group”? If so, why?
It would depend on the methodology used. I have seen enough examples of horribly bad—not to say utterly fraudulent—science in medical journals that I would actually take publication in a medical journal of a single, unreplicated study as being slight evidence against the conclusion it comes to.
(As an example, the Mayo Clinic published a study with precisely those results in the early 80s, claiming to be ‘unable to replicate’ a previous experiment. Except that where the experiment they were trying to ‘replicate’ had used intravenous doses, they used oral ones. And used a lower dose. And spaced the doses differently. And ended the trial after a much shorter period.)
So my immediate conclusion would be “No they didn’t” if I saw that result from a single paper. Because when you’ve seen people in agony, dying, and you’ve seen them walking around and healthy a couple of months later, and you see that happen repeatedly, then that is very strong evidence. And something you can test yourself is always better than taking someone else’s word for it.
However, if that study were replicated independently, and had no obviously cretinous methodological flaws upon inspection, then it would be strong evidence. But if something I don’t directly observe myself contradicts my own observations, then I will always put my own observations ahead of those of someone else.
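For what it’s worth, the disagreement above can be put in rough numbers. Under purely illustrative assumptions (none of these figures come from the thread or from any real study), suppose megadose vitamin C genuinely raised terminal-cancer survival from 10% to 20%. One surviving grandfather is then a likelihood ratio of only 2 in favour of the treatment, while a genuinely competent 500-vs-500 trial would almost never come back null:

```python
from math import comb

# Purely illustrative numbers: H1 says the treatment raises survival
# from 10% (control) to 20% (treated). Not data from any real study.

def binom_pmf(n, p, k):
    """Probability of exactly k successes in n trials with success prob p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Evidence from one anecdote: a single treated patient survives.
# Likelihood ratio P(survive | H1) / P(survive | H0) = 0.2 / 0.1.
anecdote_lr = 0.2 / 0.1

# Evidence from a null trial: 500 treated, 500 controls, and the
# treated arm fails to outperform the controls (T <= C).
n = 500
cdf_t, acc = [], 0.0
for k in range(n + 1):          # running CDF of treated survivors under H1
    acc += binom_pmf(n, 0.2, k)
    cdf_t.append(acc)

# P(treated survivors <= control survivors | H1), computed exactly.
p_null_given_h1 = sum(
    binom_pmf(n, 0.1, c) * cdf_t[c] for c in range(n + 1)
)
print(anecdote_lr, p_null_given_h1)
```

The anecdote moves the odds by a factor of 2; the null trial, if honestly run, would occur less than once in ten thousand worlds where the effect was real. Which is why the whole dispute ends up hinging on the clause “if honestly run”—the Mayo Clinic example above is exactly a case where the trial’s methodology, not its arithmetic, was the weak point.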