However, it is not clear how to obtain a ground truth with which to judge the correctness of the results.
That assumes we don’t have any criteria on which to judge good versus bad scientific papers.
You could train a model to predict the number of citations a paper will get. You could also look at outcome variables such as whether a paper was later reproduced or withdrawn.
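As a rough sketch of the citation-prediction idea (the dataset file, the feature columns, and the citation-count label here are all hypothetical placeholders, not a claim about what actually predicts citations):

```python
# Minimal sketch of a citation-count predictor.
# Assumes a hypothetical file papers.csv with per-paper features and
# an observed 5-year citation count; everything here is illustrative.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("papers.csv")  # hypothetical dataset
features = ["venue_rank", "author_h_index", "num_references", "abstract_length"]  # placeholder features

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["citations_5yr"], test_size=0.2, random_state=0
)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)
print("MAE on held-out papers:", mean_absolute_error(y_test, model.predict(X_test)))
```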
Define a utility function that collapses those variables into a single score. Then run a real-world experiment in a journal: handle 50% of submissions with one mechanism and 50% with the other, let a few years go by, and evaluate the two mechanisms against your utility function.
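To make that concrete, here is a minimal sketch of what the utility function and the 50/50 comparison could look like; the record format and the weights are assumptions chosen purely for illustration:

```python
# Sketch of a utility function over paper outcomes and a 50/50 comparison
# of two review mechanisms, evaluated a few years after the experiment.
# The weights and the record format are assumptions, not established values.
from dataclasses import dataclass
from statistics import mean

@dataclass
class PaperOutcome:
    mechanism: str    # "A" or "B": which review procedure handled the submission
    citations: int    # citations accumulated since publication
    reproduced: bool  # independently reproduced
    withdrawn: bool   # retracted / withdrawn

def utility(p: PaperOutcome) -> float:
    # Collapse the outcome variables into a single score (placeholder weights).
    return p.citations + 20.0 * p.reproduced - 100.0 * p.withdrawn

def compare(outcomes: list[PaperOutcome]) -> dict[str, float]:
    # Mean utility per mechanism; the higher mean "wins".
    return {
        m: mean(utility(p) for p in outcomes if p.mechanism == m)
        for m in {p.mechanism for p in outcomes}
    }

# Example with made-up outcomes:
print(compare([
    PaperOutcome("A", 42, True, False),
    PaperOutcome("A", 3, False, True),
    PaperOutcome("B", 17, False, False),
    PaperOutcome("B", 55, True, False),
]))
```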
Something along those lines might be done, but an interventional experiment (creating journals just to test a hypothesis about refereeing) would be impractical. That leaves observational data collection, where one might compare the differing practices of existing journals. But the confounding problems would be substantial.
Or, more promisingly, you could do an experiment with papers that are already published and have a citation record, and have experimental groups of referees assess them, and test different methods of resolving disagreements. That might actually be worth doing, although it has the flaw that it would only be assessing accepted papers and not the full range of submissions.
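A rough sketch of how that retrospective experiment could be scored, assuming each already-published paper gets independent accept/reject votes from a referee panel and later citation counts serve as the (imperfect) quality proxy; the two resolution rules shown are just examples:

```python
# Sketch: compare disagreement-resolution rules on already-published papers,
# using later citation counts as a rough quality proxy. The data format and
# the specific rules are assumptions for illustration.
from statistics import mean

# (votes from a referee panel, citations accumulated since publication)
papers = [
    ([1, 1, 0], 120),  # 1 = accept, 0 = reject; made-up records
    ([1, 0, 0], 4),
    ([1, 1, 1], 85),
    ([0, 0, 1], 2),
]

def majority(votes):   # accept if most referees say accept
    return sum(votes) * 2 > len(votes)

def unanimity(votes):  # accept only if every referee says accept
    return all(votes)

def score(rule):
    # Mean citations of the papers the rule would have accepted.
    accepted = [c for votes, c in papers if rule(votes)]
    return mean(accepted) if accepted else 0.0

for rule in (majority, unanimity):
    print(rule.__name__, score(rule))
```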
Then there's no reason why you can't test different procedures in an existing journal.