If publication and credit standards were changed, we’d see more scientists investigating interesting ideas from both within and outside of academia. The existing structure makes scientists highly conservative in which ideas they test from any source, which is bad when applied to ideas from outside academia—but equally bad when applied to ideas from inside academia.
A 5% chance of being right definitely isn’t the cutoff for which ideas scientists actually test empirically.
Throwing away about 90% of your empirical work (everything except the real hits from that 5% of true hypotheses, plus a handful of false alarms) would be a high price to pay for exploring possibly-true hypotheses. Nobody does that. Labs in cognitive psychology and neuroscience, the fields I’m familiar with, publish at least half of their empirical work (setting aside small pilot studies, which probably have a somewhat lower publication rate).
People don’t want to waste work, so they focus on experiments that are pretty likely to “work” by getting “significant” results at the p < .05 level. That’s because they can rarely publish studies that show a null effect, even ones strong enough to establish that any effect is probably too small to care about.
So the base rate for hypotheses that actually get tested is really more like 50%. This is heavily biased toward exploitation of existing knowledge rather than exploration toward new knowledge.
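To make that arithmetic concrete, here is a minimal sketch in Python. The 80% statistical power figure and the exact base rates are illustrative assumptions of mine, not figures from anywhere else; the point is just how the publishable fraction tracks the base rate of true hypotheses.

```python
# A rough sketch of the back-of-the-envelope numbers above. The base rates,
# the 80% power, and the p < .05 threshold are illustrative assumptions.

def publishable_fraction(base_rate, power=0.8, alpha=0.05):
    """Expected share of studies that come out 'significant' and hence publishable."""
    true_hits = base_rate * power            # real effects the study detects
    false_alarms = (1 - base_rate) * alpha   # null effects that pass p < .05 anyway
    return true_hits + false_alarms

for rate in (0.05, 0.50):
    frac = publishable_fraction(rate)
    print(f"base rate {rate:.0%}: ~{frac:.0%} publishable, ~{1 - frac:.0%} unpublishable")

# Under these assumptions, a 5% base rate leaves roughly 90% of studies
# unpublishable, while a 50% base rate leaves closer to half of them unpublishable.
```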
And this is why scientists mostly ignore ideas from outside of academia. They are busy enough just keeping a lab afloat. Testing established, reputable ideas is much better business than picking up a really unusual idea and demonstrating that it’s right, given how often that effort would be wasted.
The solution is publishing “failed” experiments. It is pretty crazy that people keep wasting time re-establishing which ideas aren’t true. Some of those experiments would be of little value, since they really can’t say whether there’s a large effect or not; but publishing them would at least tell others where the truth is hard to establish. And bigger, better studies finding near-zero effects could offer almost as much information as those finding large and reliable effects. The low-value ones would end up in lesser venues and so count for less on a resume, but they’d still be worth something and show that you’re doing real work.
The continuation of journals as the official gatekeepers of what information you’re rewarded for sharing is a huge problem. Even the lower-quality ones set a high bar in some senses, by refusing even to print studies with inconclusive results. And the standard is completely arbitrary in celebrating large effects while refusing even to publish studies of the same quality that give strong evidence of near-zero effects.