Someone just told me that the solution to conflicting experiments is more experiments. Taken literally this is wrong: more experiments just means more conflict. What we need are fewer experiments. We need to get rid of the bad experiments.
Why expect that future experiments will be better? Maybe if the experimenters read the past experiments, they could learn from them. Well, maybe, but maybe if you read the experiments today, you could figure out which ones are bad today. If you don’t read the experiments today and don’t bother to judge which ones are better, what incentive is there for future experimenters to make better experiments, rather than accumulating conflict?
Alternatively: there are no conflicting experiments—there are simply experiments that measure different things.
The hard part is working out what the experiments were actually measuring, as opposed to what they were claimed to be measuring. In some cases the published results may be simply ‘measuring’ the creativity of the writers in inventing data. More honest experimenters may still measure things that they did not intend, or may generalize too far in interpreting the results.
Further experiments do very often help in all these situations.
The hard part is being willing to call papers bad. The task I find difficult is getting people to acknowledge that I called them bad, rather than gaslighting me.
I’m a fan of there being many experiments, though I might be biased by my background in meta-analysis. Many good experiments are, of course, better than many poorly designed and/or executed ones, but replication is important even among good experiments: even carefully controlled experiments have the potential for error. Having many experiments is also usually a better test of the generalizability of the findings. Finally, having many experiments come out of many different laboratories, independent of each other, increases confidence that the findings are not the result of the investigator’s preference for what the results should be. If there is conflict in the findings, it might be poor study design and/or execution, or it might be that the field is missing something important about the truth.
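To make the meta-analysis angle concrete, here is a minimal sketch of how conflicting results are typically pooled and how disagreement beyond sampling noise is flagged. It uses fixed-effect inverse-variance weighting and Cochran’s Q, which are the standard tools I have in mind; the five study estimates are invented purely for illustration.

```python
import math

# Hypothetical effect estimates and standard errors from five independent
# experiments that nominally measure the same quantity (all numbers invented).
studies = [
    ("lab A", 0.42, 0.10),
    ("lab B", 0.35, 0.12),
    ("lab C", 0.90, 0.15),  # the apparent outlier
    ("lab D", 0.40, 0.09),
    ("lab E", 0.38, 0.11),
]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2.
weights = [1.0 / se ** 2 for _, _, se in studies]
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# Cochran's Q measures disagreement beyond sampling noise. Under the null
# hypothesis of a single true effect, Q follows a chi-squared distribution
# with k-1 degrees of freedom, so Q far above k-1 signals real heterogeneity
# rather than bad luck.
q = sum(w * (est - pooled) ** 2 for (_, est, _), w in zip(studies, weights))
df = len(studies) - 1

print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")
print(f"Cochran's Q = {q:.1f} on {df} df (expect about {df} if studies agree)")
```

When Q comes out far above its degrees of freedom, the standard move is to stop trusting a single pooled number and start asking what the outlying experiment was actually measuring, which is exactly the hard part described above.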
Reasonably we need both, but most of all we need some way to figure out what happened when experiments conflict, so that we can say “these results are invalid because XXX”.
Probably more of an adversarial process, where experiments and their results must be replicated.* That means experiments must be documented in far more detail, and the data must be much clearer, especially the steps that happen during clean-up.
Personally I think science is in crisis: people are incentivized to write lots of papers and publish results fast, and there is zero incentive to show that a paper is false or bad, or to replicate an experiment.
*Where possible; redoing some experiments is going to be very hard, especially if we would like the replications to have as little in common with the originals as possible (building another collider that does what the LHC does is not happening any time soon).