Having read this paper in the past I’d encourage people to look into it.
It offers the case of stomach ulcer etiology as its motivating example. A study many decades ago concluded that bacteria were not the cause of ulcers (the study was reasonably thorough; it just missed some details), and that led virtually no one to do further research in the area, because the payoff of confirming a theory that was very likely to be right was so low.
This affected many, many people. Ulcers caused by H. pylori can generally be treated simply with antibiotics and some Pepto-Bismol for the symptoms, but for lack of this treatment many people suffered chronic ulcers for decades.
After the example, the paper develops a model for both the speed of scientific progress and the likelihood of a community settling on a wrong conclusion based on the social graph of the researchers. It shows that communities where everyone knows of everyone else’s research results converge more swiftly but are more likely to make group errors. By contrast, sparsely connected communities converge more slowly but are less likely to make substantive errors.
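The model the paper builds is in the family of bandit-problem-on-a-network simulations: agents repeatedly experiment on whichever of two treatments they currently believe is better, and share results with their graph neighbors. A toy sketch of that setup (all of the parameters, the myopic choice rule, and the beta-style belief updates here are my assumptions for illustration, not taken from the paper) might look like:

```python
import random

def simulate(neighbors, p_good=0.55, p_bad=0.5, rounds=300,
             trials_per_round=10, seed=None):
    """Toy network-epistemology model: each agent holds success/failure
    counts for two treatments, experiments on the one it currently thinks
    is better, and updates on its own results plus its neighbors' results.
    Returns True if every agent ends up preferring the genuinely better
    treatment (arm 1)."""
    rng = random.Random(seed)
    n = len(neighbors)
    # counts[i][arm] = [successes, failures], with a (1, 1) prior
    counts = [[[1, 1], [1, 1]] for _ in range(n)]

    def est(c):  # estimated success rate for one arm
        return c[0] / (c[0] + c[1])

    for _ in range(rounds):
        results = []
        for i in range(n):
            # myopic choice: experiment on the arm that currently looks best
            arm = 1 if est(counts[i][1]) >= est(counts[i][0]) else 0
            p = p_good if arm == 1 else p_bad
            s = sum(rng.random() < p for _ in range(trials_per_round))
            results.append((arm, s, trials_per_round - s))
        # each agent updates on its own result and its neighbors' results
        for i in range(n):
            for j in [i] + neighbors[i]:
                arm, s, f = results[j]
                counts[i][arm][0] += s
                counts[i][arm][1] += f
    return all(est(c[1]) > est(c[0]) for c in counts)

n = 10
# densely connected community: everyone sees everyone's results
complete = [[j for j in range(n) if j != i] for i in range(n)]
# sparsely connected community: each agent sees only two neighbors
cycle = [[(i - 1) % n, (i + 1) % n] for i in range(n)]

for name, graph in [("complete", complete), ("cycle", cycle)]:
    correct = sum(simulate(graph, seed=s) for s in range(50))
    print(f"{name}: {correct}/50 runs settled on the better treatment")
```

The interesting failure mode this kind of model exhibits is lock-in: if early results on the good arm happen to look bad, a fully connected community all sees the same misleading data and abandons the good arm together, while a sparse community retains enough belief diversity that someone keeps experimenting and can eventually pull the others back.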
Part of the trick here (not really highlighted in the paper) is that hearing everyone's latest results is selfishly beneficial for researchers who are rewarded for personally answering "the biggest open question in their field." By contrast, people whose position in the social graph of knowledge workers is more marginal are likely to be working on questions where the ratio of social utility to personal career utility is higher than usual.
Most marginal researchers will gain no significant benefits, of course, because they'll simply confirm the answers that central researchers were already assuming based on a single study they heard about once. Romantically considered, these people are sort of the unsung heroes of science… the patent clerks who didn't come up with a theory of relativity even though they were looking in plausible places. But when the big surprises and big career boosts do come, they are likely to come from these sorts of researchers, not from the mainstream. Very dramatic :-)
Note, however, that this is not necessarily a reason to pat yourself on the back for being scientifically isolated. The total utility (social + personal) of marginal work may still be substantially lower than mainstream pursuit of the "lowest-hanging open question based on all known evidence."
I think the real social coordination question is more about trying to calculate the value of information for various possible experiments and then socially optimizing by having each person work on the biggest question for which they are better suited than any other available researcher. Right now, science tends to have boom-and-bust cycles: many research teams all jump on the biggest, lowest-hanging open question, the first to publish ends up in Science or Nature, and the slower teams end up publishing in lesser journals (so in some sense their work may be considered retrospectively wasteful). We had better hope that each such flurry of research reached the right answer, because the researchers in the field are likely to consider further replication work a waste of their grant dollars.