Five minutes on Google didn't turn up the study I'm thinking of, but I remember reading a study claiming that around 2⁄3 of all published studies were false positives due to publication bias (the p-value they reported was small enough to believe it).
I did, however, find a metastudy that looked at publication bias in papers about publication bias (link).
They found “statistically insignificant” (p = 0.13) evidence for false positives there too.
The way I tend to deal with such studies now is to treat them as weak evidence unless I'm interested enough to look further.
If the p-value isn't really low, I'll make guesses at how popular a topic of study it is (how many times can you try for a positive result?), how I heard about the study (more room for selection bias), how controversial the topic is (how strong is the urge to fudge something?), and what my prior probability would be.
For example, when someone tells me about a study claiming "X causes cancer", and 1) p = 0.04, 2) it would somehow benefit that person if the claim were true, 3) I see no prior reason for a link between X and cancer, and 4) I see possible other causes of the correlation that were not obviously corrected for, then I assign very little weight to the evidence.
If I find a study by googling the topic, p = 0.001, the topic isn't all that controversial, and no one would even have thought to test it unless they assigned it a high prior probability, then I'll file it under "known".
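To make the arithmetic concrete, here's a minimal sketch of the kind of update I have in mind, in odds form of Bayes' theorem. The likelihood ratios are made-up numbers chosen for illustration, not anything a study would report:

```python
def posterior(prior, likelihood_ratio):
    """Update a probability with a Bayes factor (odds form of Bayes' theorem)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# "X causes cancer": low prior, and p = 0.04 from a motivated source with
# uncorrected confounders is only weak evidence (illustrative LR of ~2).
print(posterior(prior=0.05, likelihood_ratio=2))   # ~0.10, still "very little weight"

# High-prior, uncontroversial topic with p = 0.001 and no obvious bias:
# much stronger evidence (illustrative LR of ~50).
print(posterior(prior=0.5, likelihood_ratio=50))   # ~0.98, filed under "known"
```

The point of the odds form is that the four guesses above all feed into just two numbers: the prior, and how much the study's p-value should actually move you once selection bias and fudging are accounted for.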
I think you are thinking of Ioannidis, "Why Most Published Research Findings Are False".