I don’t see a paradox. After 100 experiments one can conclude either that (1) the confidence level was set too low, or that (2) the papers are all biased toward two-tailed coins. But which is it?
(1) is obvious, of course, in hindsight; however, changing your confidence level after the observation is generally advised against. And (2) seems to confuse Type I and Type II error rates.
On another level, I suppose it can be said that of course they are all biased! But biased by the actual two-tailed coin, rather than by researchers’ prejudice against normal coins.
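To make the Type I/Type II distinction concrete, here is a minimal simulation sketch (the flip count, significance level, and function names are illustrative assumptions, not anything from the original scenario). With a fair coin, rejections of the fair-coin null occur at roughly the Type I rate α; with a two-tailed coin every test rejects, so those 100 “biased” findings are correct detections, not false positives:

```python
import math
import random

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test p-value for k tails in n flips."""
    pmf = [math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    # Sum the probabilities of all outcomes at least as unlikely as the observed one.
    return sum(pr for pr in pmf if pr <= pmf[k] * (1 + 1e-12))

def run_experiments(prob_tails, n_flips=50, n_experiments=100, alpha=0.05, seed=0):
    """Count how many of n_experiments reject the fair-coin null at level alpha."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_experiments):
        tails = sum(rng.random() < prob_tails for _ in range(n_flips))
        if binom_two_sided_p(tails, n_flips) < alpha:
            rejections += 1
    return rejections

# Fair coin: rejections are Type I errors, roughly alpha of experiments (~5 of 100).
print("fair coin:      ", run_experiments(prob_tails=0.5))
# Two-tailed coin: every flip is tails, so every test rejects. These are true
# detections of bias; the Type II error rate (missing the bias) is ~0.
print("two-tailed coin:", run_experiments(prob_tails=1.0))
```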