If you dismiss ideas coming from outside academia as non-scientific, you have a point. Those ideas were not properly tested, peer-reviewed, etc.
But if you dismiss those ideas as not worth scientists’ attention, you are making a much stronger statement. You are effectively making a positive statement that the probability of those ideas being correct is smaller than 5%. You may be right, or you may be wrong, but it would be nice to provide some hint about why you think so. Are you just dissing the author; or do we have an actual historical experience that among ideas coming from a certain reference group, less than 1 in 20 turns out to be correct?
Why 5%? Let’s do the math. Suppose that we have a set of hypotheses out of which about 5% are true. We test them, using a p=0.05 threshold for publishing. That means, out of 10000 hypotheses, about 500 are true, and let’s assume that all of them get published; and about 9500 are false, and about 475 of them get published. This would result in approximately 50% failure in replication… which seems to be business as usual in certain academic journals?
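The arithmetic above can be written out explicitly. This is just the post's own numbers, not a simulation, and it keeps the simplifying assumption that every true hypothesis is detected and published (i.e., perfect statistical power):

```python
# Numbers from the argument above: 10,000 hypotheses, 5% true base rate,
# p = 0.05 publication threshold, and the assumption of perfect power.
n = 10_000
base_rate = 0.05
alpha = 0.05

true_published = n * base_rate                  # 500 true positives
false_published = n * (1 - base_rate) * alpha   # 475 false positives

# Fraction of published results that are false, i.e., expected to fail replication.
replication_failure = false_published / (true_published + false_published)
print(f"{replication_failure:.0%} of published results are false")  # ≈ 49%
```

With these assumptions, roughly half of published results are false positives, matching the replication rates reported in some fields.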
So if it is perfectly okay for scientists to explore ideas that have about a 5% chance of being correct, then by saying that a certain idea should not be explored scientifically, you seem to suggest that its probability is much smaller.
Note that this is different from expecting an idea to turn out to be correct. If an idea has a 10% chance of being correct, it means that I expect it to be wrong, and yet it makes sense to explore the idea seriously.
(This is a condensed version of an argument I made in a week-old thread, so I wanted to give it a little more visibility. On top of that, I suspect that status concerns can make a great difference in scientists’ incentives. Exploring ideas originating in academia that have a 5% chance of being right is… business as usual. Exploring ideas originating outside of academia that have a 5% chance of being right will make you look incompetent if they turn out to be wrong, which indeed is the likely outcome. No one ever got fired for writing a thesis on IBM, so to speak.)
If publication and credit standards were changed, we’d see more scientists investigating interesting ideas from both within and outside of academia. The existing structure makes scientists highly conservative in which ideas they test from any source, which is bad when applied to ideas from outside academia—but equally bad when applied to ideas from inside academia.
5% definitely isn’t the cutoff for which ideas scientists actually do test empirically.
Throwing away about 90% of your empirical work (everything except the real hits and the false alarms from that 5% base rate) would be a high price to pay for exploring possibly-true hypotheses. Nobody does that. Labs in cognitive psychology and neuroscience, the fields I’m familiar with, publish at least half of their empirical work (outside of small pilot studies, which probably publish a bit less).
People don’t want to waste work, so they focus on experiments that are pretty likely to “work” by yielding “significant” results at the p<.05 level. This is because they can rarely publish studies that show a null effect, even when those studies are strong enough to establish that any effect is probably too small to care about.
So it’s really more like a 50% chance base rate. This is heavily biased toward exploitation of existing knowledge rather than exploration toward new knowledge.
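Rerunning the original post's arithmetic with this base rate shows why the exploitation strategy is attractive. Same simplifying assumptions as before (perfect power, p = 0.05 threshold); the 50% figure is the estimate from the comment above:

```python
# Same calculation as the original 5% example, but with a 50% true base rate.
n = 10_000
base_rate = 0.50
alpha = 0.05

true_published = n * base_rate                  # 5000 true positives
false_published = n * (1 - base_rate) * alpha   # 250 false positives

replication_failure = false_published / (true_published + false_published)
print(f"{replication_failure:.1%} of published results are false")  # ≈ 4.8%
```

A 50% base rate drives the expected replication-failure rate from roughly half down to under 5%, which is a strong incentive to test only safe bets.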
And this is why scientists mostly ignore ideas from outside of academia. They are very busy working hard to keep a lab afloat. Testing established and reputable ideas is much better business than finding a really unusual idea and demonstrating that it’s right, given how often that effort would be wasted.
The solution is publishing “failed” experiments. It is pretty crazy that people keep wasting time re-establishing which ideas aren’t true. Some of those experiments would be of little value on their own, since they can’t really say whether there’s a large effect or not; but publishing them would at least tell others where the truth is hard to establish. And bigger, better studies finding near-zero effects could offer almost as much information as those finding large and reliable effects. The less informative ones would be published in lesser venues and so count for less on a resume, but they’d still be useful and show that you’re doing real work.
The continuation of journals as the official gatekeepers of what information you’re rewarded for sharing is a huge problem. Even the lower-quality ones set a high bar in some senses, by refusing even to print studies with inconclusive results. And the standard is completely arbitrary: celebrating large effects while refusing even to publish studies of the same quality that give strong evidence of near-zero effects.
It gets very complicated when you add in incentives and recognize that science and scientists are also businesses. There’s a LOT of the world that scientists haven’t (or haven’t in the last century or so) really tried to prove, replicate, and come to consensus on.
AlphaFold didn’t come out of academia. That doesn’t make it non-scientific. As Feynman said in his cargo-cult science speech, plenty of academic work is not properly tested. Being peer-reviewed doesn’t make something scientific.
Conceptually, I think you are making a mistake when you treat ideas and experiments as the same, equating the probability of an experiment finding a result with the probability of the idea being true. Finding a good experiment to test an idea is nontrivial.
A friend of mine was working in a psychology lab, and according to my friend, the professor leading the lab was mostly trying to p-hack her way into publishing results.
Another friend spoke approvingly of the same professor’s work, because the professor managed to get Buddhist ideas into academic psychology, and now the official scientific definition of the term resembles certain Buddhist notions.
The professor has a well-respected research career in her field.
I think it’s important to disambiguate searching for new problems from searching for new results.
For new results: while I have as little faith in academia as the next guy, I have a web of trust with other researchers who I know do good work, and the rate of their work being correct is much higher. I also give a lot of credence to their verification and word of mouth on experiments. This web of trust is a much more useful high-pass filter for understanding the state of the field. I have no such filter for results outside of academia. When searching for new concrete information, information from outside academia is not worth scientists’ attention due to the lack of trust and reputation.
When it comes to searching for new hypotheses/problems, an important criterion is how much you personally believe in your direction. You never practically pursue ideas with a 10% probability: ideally, you pursue ideas you think have a 50% probability but your peers believe have a 15% probability. (This assumes you have high risk tolerance, like I do, and are okay with a lot of failure. Otherwise, do incremental research.) For problem generation, varied sources of information are useful, but the belief must come intrinsically.
When searching for interesting results to verify and replicate, it’s open season.
As a result, I think that ideas outside academia are not useful to researchers unless the researchers in question have a comparative advantage at synthesizing those ideas into good research inspiration.
As for non-ideal reasons for ignoring results from outside academia, I would blame reviewers, and a generally low appetite for risk despite research being an inherently risky profession, more than vague “status concerns.”
Well, ideas from outside the lab, much less from outside academia, are unlikely to be well suited to that lab’s specific research agenda. So even if an idea is suited in theory to some lab, matching it to that lab may make it not worthwhile.
There are a lot of cranks and they generate a lot of bad ideas. So a < 5% probability seems not unreasonable.