I don’t think that paper allows any such estimate, because it’s based on published results, which are highly biased toward “significant” findings. That’s why, for example, psychology meta-analyses report effect sizes about 3x larger than registered replications of the same effects. For an estimate of the replicability of a field you need something like the Many Labs project (~54% replication rate, median effect size about 1/4 that of the original study).
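To see why a significance filter alone produces that kind of inflation, here’s a minimal simulation sketch. The true effect size, per-arm sample size, and publication rule are all illustrative assumptions I picked, not numbers from any of the papers discussed:

```python
# Minimal sketch of the "significance filter": if only significant,
# positive results get published, published effects overestimate the truth.
# true_d and n are illustrative assumptions, not from any cited paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n, n_studies = 0.2, 20, 10_000   # small true effect, n = 20 per arm

published = []
for _ in range(n_studies):
    treat = rng.normal(true_d, 1.0, n)
    ctrl = rng.normal(0.0, 1.0, n)
    res = stats.ttest_ind(treat, ctrl)
    if res.pvalue < 0.05 and res.statistic > 0:   # the file drawer in action
        pooled_sd = np.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
        published.append((treat.mean() - ctrl.mean()) / pooled_sd)

print(f"true effect:           {true_d}")
print(f"mean published effect: {np.mean(published):.2f}")  # ~0.7-0.8 here
```

With these made-up numbers the mean published effect comes out roughly 3-4x the true one: underpowered studies plus a significance filter are enough to produce that gap on their own, before any p-hacking.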
Just glancing at that Many Labs paper, it’s looking specifically at psych studies replicable through a web browser. Who knows to what extent that generalizes to psych studies more broadly, or to biomedical research?
So it sounds like you’re worried that a bunch of failed replication attempts got put in the file drawer, even after there was a published significant finding for the replication attempt to be pushing back against?
I think the OSC’s reproducibility project is much more of what you’re looking for, if you’re worried that Many Labs is selecting only for a specific type of effect.
They selected studies quasi-randomly and assessed replication with a variety of measures (confidence intervals, p-values, effect size magnitude and direction, subjective assessment). Of the 100 studies they examined, around 30-50% replicated, depending on the criterion used.
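As a toy illustration of why the rate depends on the criterion, here’s a sketch with made-up effect sizes and standard errors; the two checks are simplified versions of the significance and confidence-interval measures the OSC used, not their exact procedure:

```python
# Toy sketch: the same replication can "succeed" under one criterion and
# "fail" under another. All numbers below are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class Result:
    effect: float  # standardized effect estimate
    se: float      # its standard error

def significant_same_direction(rep: Result, alpha: float = 0.05) -> bool:
    """Criterion 1: replication is significant on its own, same sign."""
    z = rep.effect / rep.se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p
    return p < alpha and rep.effect > 0

def original_inside_replication_ci(orig: Result, rep: Result) -> bool:
    """Criterion 2: original estimate falls inside the replication's 95% CI."""
    lo = rep.effect - 1.96 * rep.se
    hi = rep.effect + 1.96 * rep.se
    return lo <= orig.effect <= hi

orig = Result(effect=0.60, se=0.15)  # hypothetical original finding
rep = Result(effect=0.25, se=0.10)   # hypothetical, smaller replication

print(significant_same_direction(rep))            # True  (p ~ 0.012)
print(original_inside_replication_ci(orig, rep))  # False (CI ~ [0.05, 0.45])
```

The same pair of results counts as a success by the first measure and a failure by the second, which is how one dataset can yield a 30-50% range.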
I don’t know enough about the biomedical field, but a brief web search turned up the following links, which might be useful:
Science Forum: The Brazilian Reproducibility Initiative, which aims to reproduce 60-100 Brazilian studies, with results due in 2021.
Section 2 of this symposium report from 2015, which collects some studies (including the OSC one I mention above).
This page references some studies from around 2010-2011 that found replication rates of ~10% for oncology-related results.