If we assume that a fraction p of tests of a false hypothesis come up as false positives, and that only positive results are published, then the question becomes how many scientists are trying to prove (and disprove) the same hypothesis. If 1000 scientists are trying to prove that Drug Y slows the progression of Alzheimer’s disease, and p < 0.01 is required for publication, then even if the drug does nothing we would expect about 1000 × 0.01 = 10 false-positive publications by chance alone, so we need to see more than 10 independent publications supporting this result before we should believe it. Things would be so much easier if negative results were given as much weight as positive ones… Can anyone think of a good way of calibrating the publication bias towards positives?
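A minimal sketch of the arithmetic above, assuming (hypothetically) that Drug Y has no real effect and that each of the 1000 labs runs an independent study that gets published only if it reaches p < 0.01:

```python
import random

# Assumed scenario (illustrative only): the drug does nothing, so under the
# null each lab's study crosses the p < 0.01 threshold with probability 0.01.
N_LABS = 1000
ALPHA = 0.01
N_SIMULATIONS = 10_000

random.seed(0)
false_positive_counts = []
for _ in range(N_SIMULATIONS):
    # Count how many of the 1000 labs get a "publishable" false positive.
    positives = sum(1 for _ in range(N_LABS) if random.random() < ALPHA)
    false_positive_counts.append(positives)

mean_positives = sum(false_positive_counts) / N_SIMULATIONS
print(f"Expected false-positive publications per hypothesis: {mean_positives:.1f}")
# Prints roughly 10, matching the back-of-envelope 1000 * 0.01 estimate.
```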
This is what they do in the wretched hive of scum and villainy that is medical research: http://www.cochrane-net.org/openlearning/HTML/mod15-3.htm