Read literally, I’m not sure what to make of this caveat. First, isn’t it the number of subjects, rather than the number of studies, that’s relevant here? Perhaps they mean “average-size” studies? Second, I don’t know what they mean by “with an effect size <0.0.” Effect sizes below zero (which is what this refers to) are those that show a cognitive impairment. To make a precise statement about how many subjects (or average-sized studies) would be needed to bring the overall effect size down to 0.2, we’d need to know what effect size those missing studies are assumed to have. Mathematically, the statement just doesn’t make sense. I’ve read meta-analyses where the researchers at least try to find unpublished work, and it’s disappointing that the authors not only don’t do that here, but also write about the issue in a way that suggests a lack of care about it.
The article specifies that it used Orwin’s fail-safe N to calculate the number of missing studies required to bring the effect down to a small effect size. It’s not as good as the standard trim-and-fill method I’ve seen used in a lot of meta-analyses, but it makes mathematical sense and provides evidence.
Ah, perhaps the “<” is a typo.
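For concreteness, here is a minimal sketch of what Orwin’s fail-safe N computes. The function name and the numbers are mine, purely for illustration; they are not values from the article.

```python
def orwin_failsafe_n(k, mean_d, target_d, missing_d=0.0):
    """Orwin's fail-safe N: how many additional studies, each averaging
    `missing_d`, would be needed to pull the pooled mean effect from
    `mean_d` down to `target_d`."""
    if target_d <= missing_d:
        raise ValueError("target effect must exceed the assumed effect of the missing studies")
    return k * (mean_d - target_d) / (target_d - missing_d)

# Hypothetical numbers, not the article's: 30 studies averaging d = 0.5.
# How many null-result (d = 0) studies would drag the pooled mean down to 0.2?
n_missing = orwin_failsafe_n(k=30, mean_d=0.5, target_d=0.2)
print(n_missing)  # 45.0 -- check: (30*0.5 + 45*0.0) / (30 + 45) = 0.2
```

The point is that the calculation only works once you pin down the effect size the missing studies are assumed to have (here, zero), which is exactly the quantity the original caveat left unspecified.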