My tentative theory is that there’s a lot of knowledge that’s less formal than science in engineering, manufacturing, and the practice of medicine which makes it possible to get work done, and some fairly effective methods of filtering information that comes from science.
Honestly? Not really.
There are some filters, true. If you work in biochem, for example, you will end up trying to follow someone's instructions in a lab setting, and you eventually learn better than to trust certain open-access publishers, and that certain numbers are red flags. I've personally tried to use someone's image processing code from a research paper, and found that it wouldn't even compile in the version of MATLAB they claimed to use, and wouldn't have worked even if it had compiled. That ended up being a simple and correctable error, but there are far less forgivable horror stories in the machine learning field. The Schön scandal was discovered when folks tried to replicate things in the lab and it simply didn't work. There have been some efforts to computationally distinguish natural data sets from fabricated ones, although I've not yet seen any serious implementation.
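For what it's worth, the best-known of those statistical screens is a leading-digit (Benford's law) test. Here's a minimal sketch in Python; the chi-square cutoff and the synthetic datasets are my own illustrative choices, not drawn from any published screening tool:

```python
import math
import random
from collections import Counter

def benford_screen(values, chi2_cutoff=15.51):
    """Crude fabrication screen: compare leading-digit frequencies
    against Benford's law with a Pearson chi-square statistic.

    chi2_cutoff = 15.51 is the p = 0.05 critical value at 8 degrees
    of freedom; the cutoff choice here is arbitrary, not from any paper.
    Returns (chi2, flagged).
    """
    # Benford's law: P(leading digit = d) = log10(1 + 1/d).
    expected = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

    # Leading digit via the fractional part of log10|v|.
    digits = [int(10 ** (math.log10(abs(v)) % 1.0)) for v in values if v]
    counts, n = Counter(digits), len(digits)

    chi2 = sum((counts[d] - n * p) ** 2 / (n * p) for d, p in expected.items())
    return chi2, chi2 > chi2_cutoff

# Uniformly "invented" numbers fail badly; log-uniform data (spanning
# whole decades) matches Benford by construction and usually passes.
fabricated = [random.uniform(100, 999) for _ in range(5000)]
organic = [10 ** random.uniform(0, 4) for _ in range(5000)]
print(benford_screen(fabricated))  # chi2 in the thousands -> flagged
print(benford_screen(organic))     # chi2 near 8 -> usually passes
```

A real screen would need more than one test, of course; the point is only that the first-pass version is cheap, which makes the absence of serious implementations more striking.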
There are some structural issues that make experimental psychology and experimental sociology a little more prone to unreliability and fraud, but those issues aren't as unique to those fields as we'd like. The biggest factors are the time cost of replicating experiments and the natural variance among test subjects, and both are problems in practical medical science and even engineering, too.
We didn't find out about Stapel's fraud for years, even though most of his studies required no greater research tools than bored undergrads. Vioxx was effectively tested on eighty million patients, a sample it never needed: the heart disease risks showed up in vastly smaller groups. It took over a decade for the fraudulent Wakefield study to finally be retracted. And it's impossible to measure the degree of fraud or error we never catch.
The practical difference is that we've gotten the larger test cases, often at the cost of significant amounts of sweat and blood, and sometimes for much lesser sins than fraud or irreproducibility. Where we didn't think through every possible mechanical flaw or didn't test the materials thoroughly enough, bridges collapsed with people on them. Where a simple math error slipped through, over a hundred million USD went exactly the wrong way. And we learn.
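If the math error in question is the oft-cited Mars Climate Orbiter loss, the bug class is a silent unit mismatch: impulse reported in pound-force seconds, read as newton-seconds. A minimal sketch of that failure mode, with made-up numbers and a hypothetical vendor routine rather than anything from the mission:

```python
LBF_S_TO_N_S = 4.448222  # one pound-force second in newton-seconds

def vendor_impulse_lbf_s() -> float:
    """Hypothetical vendor routine reporting impulse in lbf*s
    (stands in for software contracted to deliver SI units)."""
    return 100.0

# Consumer code assuming SI units is silently wrong by ~4.45x;
# nothing in the value itself or the type system flags the error.
impulse_assumed_si = vendor_impulse_lbf_s()                # wrong
impulse_converted = vendor_impulse_lbf_s() * LBF_S_TO_N_S  # right

print(impulse_assumed_si, impulse_converted)  # 100.0 vs ~444.8
```

The fix is boring (carry units in names or types, check at interfaces), which is rather the point: reality punished an error peer review would never have seen.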
The only meaningful filter is reality. Peer review will forgive a great many errors. Physics does not.