There are all sorts of things one has to control for, e.g. parental age, and these may inflate the error bars (if the error from imperfectly controlling for a confounder is accounted for), potentially putting zero within them. Without looking at all the studies one can’t really tell.
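To make that concrete, here is a quick simulation (all numbers made up) of the mechanism: controlling for a confounder through a noisy proxy leaves extra residual variance in the outcome, which widens the confidence interval on the exposure effect (and also biases the point estimate, i.e. residual confounding). This is only a sketch, not a claim about any particular study.

```python
# Minimal sketch (hypothetical numbers): CI on an exposure effect when
# adjusting for the exact confounder vs. a noisy measurement of it.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

confounder = rng.normal(size=n)                  # e.g. standardized parental age
exposure = 0.5 * confounder + rng.normal(size=n)
outcome = 0.0 * exposure + 1.0 * confounder + rng.normal(size=n)  # true effect is zero

def exposure_ci(covariate):
    """95% CI for the exposure coefficient, adjusting for `covariate`."""
    X = np.column_stack([np.ones(n), exposure, covariate])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    resid = outcome - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] - 1.96 * se, beta[1] + 1.96 * se

print("CI, exact control:", exposure_ci(confounder))
print("CI, noisy control:", exposure_ci(confounder + rng.normal(size=n)))
```

The noisy-control CI comes out visibly wider; an analysis that honestly propagates that adjustment error reports bigger error bars, which is exactly how zero can end up inside them.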
Some studies ought to also have a chance of producing a spurious finding that ‘vaccines prevent autism’, but apparently that was not observed either.
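Back-of-envelope version of that point (study count assumed for illustration): if the null were true and each study ran a two-sided test at alpha = 0.05, roughly 2.5% of studies should land significant in the ‘protective’ direction by chance alone.

```python
# Hypothetical numbers: expected spurious 'protective' findings across
# n_studies independent two-sided tests at alpha = 0.05.
n_studies = 25            # assumed for illustration, not the actual count
p_protective = 0.025      # half of a two-sided alpha of 0.05

print("Expected spurious 'protective' findings:", n_studies * p_protective)
print("P(at least one):", 1 - (1 - p_protective) ** n_studies)
```

With a couple dozen studies you’d expect at least one such finding nearly half the time, so seeing none is itself mildly informative about how the tests were run or reported.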
What does that have to do with whether the researchers followed the nigh-universal practice of setting alpha to 0.05?
Example: I am measuring radioactivity with a Geiger counter. I have statistical error (a 95% confidence interval from counting statistics), but I also have systematic error (e.g. the Geiger counter’s sensitivity is ‘guaranteed’ to be within 5% of a specified value). If I am reporting an unusual finding, I’d want the result not to be explainable by the sum of the statistical error and the bound on the systematic error. The bottom line is that, in general, there’s no guarantee that “95% confidence” findings will go the other way 5% of the time. It is perfectly OK to do something that inadvertently boosts the confidence beyond the nominal level.
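Here’s a toy version of that error budget (all numbers hypothetical). Counting statistics are approximately Poisson, so the statistical error on N counts is roughly sqrt(N); the systematic term is the instrument’s 5% sensitivity bound.

```python
# Toy Geiger-counter error budget (hypothetical numbers throughout).
import math

counts = 1200            # observed counts in the measurement window
expected = 1000          # counts predicted by the 'nothing unusual' baseline

stat_err = 1.96 * math.sqrt(counts)   # 95% CI half-width, Poisson approximation
sys_err = 0.05 * counts               # 5% sensitivity bound on the counter

excess = counts - expected
# Conservative test: demand the excess exceed the *sum* of both error terms.
# This is precisely the move that quietly pushes the effective confidence
# above the nominal 95%.
print("excess:", excess, " error budget:", stat_err + sys_err)
print("unusual finding survives:", excess > stat_err + sys_err)
```

Summing the two terms (rather than, say, adding them in quadrature) is the more conservative choice, which is the point: the reported “95%” is a floor, not an exact false-positive rate.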