Honorton’s estimate required forty-six unreported chance-level experiments for each experiment in the meta-study, including the published experiments that themselves gave no significant support for the paranormal hypothesis.
Note that this is a bogus calculation: it assumes that if there were no publication bias, unpublished studies would be just as likely to show positive results as published ones, and then asks how many chance-level studies must be added to “dilute” the combined result below a threshold significance level. But the whole point of publication bias is that the file-drawer is enriched with negative results, not chance-level ones. See this paper by Scargle. Given bias, far fewer file-drawer studies are needed to cancel the published results. Further, various positive biases will be concentrated in the published literature; for example, people committing outright fraud will normally do it for an audience.
The number of studies needed also collapses if questionable research practices (optional stopping, post hoc reporting of subgroups as separate experiments, etc.) are used to concentrate ‘hits’ into some experiments while the misses are concentrated into a small file drawer.
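To see how sensitive the fail-safe number is to the assumed file-drawer mean, here is a minimal sketch using Rosenthal-style Stouffer combination. The input numbers (28 studies, each with z = 1.0, and a file-drawer mean of z = −0.5) are hypothetical illustrations, not Honorton’s actual data; the point is only that letting the file-drawer be mildly negative, rather than exactly chance-level, shrinks the required count by roughly an order of magnitude.

```python
import math

def failsafe_n(z_sum, k, z_fd=0.0, z_crit=1.645):
    """Smallest number n of file-drawer studies (each with mean z-score
    z_fd) needed to pull a Stouffer combined z below z_crit.
    Combined z after adding n studies: (z_sum + n*z_fd) / sqrt(k + n).
    A simple linear search; z_fd=0.0 recovers the classic 'chance-level
    file-drawer' assumption criticized above."""
    n = 0
    while (z_sum + n * z_fd) / math.sqrt(k + n) >= z_crit:
        n += 1
    return n

# Hypothetical meta-analysis: k = 28 studies, each z = 1.0.
z_sum, k = 28.0, 28

# Classic assumption: unpublished studies average exactly chance (z = 0).
print(failsafe_n(z_sum, k, z_fd=0.0))   # hundreds of studies required

# Biased file-drawer: unpublished studies average mildly negative (z = -0.5).
print(failsafe_n(z_sum, k, z_fd=-0.5))  # only a few dozen required
```

The contrast between the two calls is the argument in miniature: the inflated “fail-safe” figure depends entirely on the chance-level assumption for the file drawer.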
Parapsychologists counter that the few attempts to audit for unpublished studies have not found a large skew toward negative results; but such audits cannot catch everything, and these inflated “fail-safe” statistics remain misleadingly large regardless.