You seem to be under the impression that Einstein’s papers were not reviewed by professional physicists. That’s incorrect: They were reviewed by journal editors who were professional physicists.
But Einstein only needed one journal editor to decide that his paper was good stuff that would rock the boat, whereas under peer review, he would in practice need every peer reviewer to agree that his papers did not rock the boat.
Under the old system, he needed one of n to get published. Under the new system, it tends to be closer to n of n.
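As a back-of-the-envelope illustration (the approval probability p, the number of gatekeepers n, and the independence assumption are hypothetical, not taken from anything above): if each gatekeeper independently approves an unorthodox paper with probability p, then "one of n" acceptance is dramatically more likely than "n of n" acceptance.

```python
# Toy model of "1 of n" vs. "n of n" gatekeeping.
# p = assumed probability that any single gatekeeper approves an
# unorthodox paper; the values below are purely illustrative.

def one_of_n(p: float, n: int) -> float:
    """Probability that at least one of n independent gatekeepers approves."""
    return 1 - (1 - p) ** n

def n_of_n(p: float, n: int) -> float:
    """Probability that all n independent gatekeepers approve."""
    return p ** n

p, n = 0.2, 3
print(f"old system (1 of {n}): {one_of_n(p, n):.3f}")  # 0.488
print(f"new system ({n} of {n}): {n_of_n(p, n):.3f}")  # 0.008
```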
Consensus, as Galileo argued, produces bad science.
And, pretty obviously, we are getting bad science.
Recall the recent study, reported in Nature, which found that only three of fifty results in cancer research were replicable.
The background to this replication study is that biomedical companies draw on academic research when trying to develop new medications, and they decided they needed to do some quality assurance.
But Einstein only needed one journal editor to decide that his paper was good stuff that would rock the boat, whereas under peer review, he would in practice need every peer reviewer to agree that his papers did not rock the boat.
The exact rules of peer review vary between journals and conferences, but in general no single referee has veto power. If there is major disagreement between referees, they discuss it, and if they fail to reach a consensus the journal editors or conference chairmen step in and make the final decision, possibly after recruiting additional referees.
This seems to be a more accurate process than a single editor making a decision based only on their own expertise.
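Here is a toy simulation of that intuition (the per-reviewer accuracy q, the independence assumption, and the majority-vote rule are all simplifications of mine, not how any particular journal actually works): if each judge is independently right with probability q > 0.5, a majority of three referees beats a single editor with the same q.

```python
import random

# Toy comparison of a single editor vs. a majority vote of three referees,
# assuming every judge is independently correct with probability q.
# The numbers are invented; this only illustrates the aggregation effect.

def majority_accuracy(q: float, n_judges: int, trials: int = 100_000) -> float:
    correct = 0
    for _ in range(trials):
        right_votes = sum(random.random() < q for _ in range(n_judges))
        if right_votes > n_judges / 2:
            correct += 1
    return correct / trials

random.seed(0)
q = 0.7  # hypothetical per-judge accuracy
print(f"single editor  : {majority_accuracy(q, 1):.3f}")  # ~0.700
print(f"three referees : {majority_accuracy(q, 3):.3f}")  # ~0.784
```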
Recall the recent study, reported in Nature, which found that only three of fifty results in cancer research were replicable.
That’s a false-positive problem, whereas you seemed to be arguing that peer review generates too many false negatives.
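To make the distinction concrete (the 3-of-50 figure is just the one quoted above; everything else is bookkeeping): a replication study counts published claims that turn out to be wrong, i.e. false positives, and says nothing about sound papers that review kept out, i.e. false negatives.

```python
# False positives vs. false negatives, using the figure quoted above.
# A replication study can only measure the first; the second would
# require knowing the fate of rejected papers, which we don't.

published = 50
replicated = 3

false_positives = published - replicated  # published, but didn't hold up
print(f"false positives: {false_positives}/{published} "
      f"({false_positives / published:.0%})")

false_negatives = None  # sound papers rejected by review: not observable here
print(f"false negatives: {false_negatives} (unknown from replication data)")
```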
Anyway, neither referees nor editors try to replicate experimental results while reviewing a paper. That’s not the goal of the review process.
The review process is not intended to be a scientific “truth” certification. It is intended to ensure that a paper is innovative, clearly written, easy to place in the context of research in its field, free of glaring methodological errors, and described in sufficient detail to allow experimental replication.
Replication is something that is done by independent researchers after the paper is published.