The Decline Effect and the Scientific Method (article @ the New Yorker) [link]
First, as a physicist, I do have to point out that this article concerns mainly the softer sciences, such as psychology and medicine.
A summary of the explanations offered for this effect (two toy simulations illustrating them follow the quotes):
“The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out.”
“Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found.”
“Richard Palmer… suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. … Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results.”
“According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. … The current “obsession” with replicability distracts from the real problem, which is faulty design.”
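The first two explanations are easy to see in a toy simulation. Below is a minimal sketch (my own construction, not from the article): many labs measure a small true effect, only unusually large results clear an arbitrary "publication filter", and unfiltered replications then regress toward the true value. Every parameter here is an illustrative assumption.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.1   # small real effect (arbitrary illustrative value)
NOISE_SD = 1.0      # per-subject measurement noise
N_LABS = 1000       # hypothetical number of initial studies

def run_study(n=30):
    """Return the observed mean effect from one noisy study of n subjects."""
    samples = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(n)]
    return statistics.mean(samples)

# Stage 1: many labs run the study, but only "impressive" results get published.
initial = [run_study() for _ in range(N_LABS)]
published = [e for e in initial if e > 0.3]   # publication filter: big positive effects only

# Stage 2: replications of the published findings face no filter,
# so they regress toward the true effect.
replications = [run_study() for _ in published]

print(f"true effect:           {TRUE_EFFECT:.2f}")
print(f"mean published effect: {statistics.mean(published):.2f}")
print(f"mean replication:      {statistics.mean(replications):.2f}")
```

With these numbers the published estimates average several times the true effect, and the replications fall right back toward it: a "decline" with no change in the underlying phenomenon.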
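Ioannidis's "significance chasing" point can be illustrated the same way. In the sketch below (again my own toy construction, not his analysis), a drug with no effect at all is analyzed twenty different ways, and the analyst reports whichever slice happens to cross p < 0.05. With 20 independent looks the chance of a false "discovery" is 1 - 0.95^20, about 0.64, not the nominal 0.05.

```python
import math
import random
import statistics

random.seed(1)

def two_sample_p(a, b):
    """Crude two-sample z-test p-value; adequate for a toy demo with n=50 per group."""
    na, nb = len(a), len(b)
    se = math.sqrt(statistics.variance(a) / na + statistics.variance(b) / nb)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    # two-sided p-value from the normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A drug with NO real effect, but the analyst slices the data 20 ways
# (subgroups, alternative outcomes, covariate choices...) and reports
# whichever comparison clears p < 0.05.
TRIALS = 1000
SLICES = 20
hits = 0
for _ in range(TRIALS):
    for _ in range(SLICES):
        treated = [random.gauss(0, 1) for _ in range(50)]
        control = [random.gauss(0, 1) for _ in range(50)]
        if two_sample_p(treated, control) < 0.05:
            hits += 1
            break

print(f"fraction of null studies reporting a 'significant' effect: {hits / TRIALS:.2f}")
# Expect roughly 0.64 rather than the nominal 0.05: the threshold means
# little once you get to choose which comparison to report.
```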
These are problems with how the scientific method is applied in practice, not with the principle of the method itself. It is certainly important to address them. I think they appear so often in the softer sciences because biological entities are enormously complex, so high-level ideas that make broad generalizations are more susceptible to random error and statistical anomalies, as well as to personal bias, both conscious and unconscious.
For those who haven’t read it, Richard Feynman’s lecture on cargo cult science is a good primer on experimental design.