Another entry from the ‘no one understands p-values’ files; “Policy: Twenty tips for interpreting scientific claims”, Sutherland et al 2013, Nature—there’s a lot to like in this article, and it’s definitely worth remembering most of the 20 tips, except for the one on p-values:
Significance is significant. Expressed as P, statistical significance is a measure of how likely a result is to occur by chance. Thus P = 0.01 means there is a 1-in-100 probability that what looks like an effect of the treatment could have occurred randomly, and in truth there was no effect at all. Typically, scientists report results as significant when the P-value of the test is less than 0.05 (1 in 20).
Whups. p = 0.01 does not mean that our subjective probability that the effect is zero is now just 1%, or that there is a 99% chance the effect is non-zero; the p-value is the probability of data at least this extreme if there were truly no effect, not the probability that there is no effect given the data.
(The Bayesian probability could be very small or very large depending on how you set it up; if your prior is small, then data with p = 0.01 will not shift your posterior very much, for exactly the reason Sutherland et al 2013 explain in their section on base rates!)
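To make that base-rate point concrete, here is a minimal simulation sketch; the 10% base rate, n = 30 per group, and d = 0.5 effect size are illustrative assumptions of mine, not figures from Sutherland et al 2013. It asks: of the simulated experiments that reach p < 0.01, how many actually had no effect at all?

```python
# Minimal simulation sketch of the base-rate point (assumed parameters:
# 10% of studied effects are real, two-group t-tests with n = 30 per group,
# true standardized effect d = 0.5 when the effect is real).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 50_000
n_per_group = 30
base_rate = 0.10          # assumed prior probability that the effect is real
effect_size = 0.5         # assumed true effect size when it is real

# Decide which experiments have a real effect, then simulate the data
effect_is_real = rng.random(n_experiments) < base_rate
true_means = np.where(effect_is_real, effect_size, 0.0)[:, None]
control = rng.normal(0.0, 1.0, (n_experiments, n_per_group))
treatment = rng.normal(true_means, 1.0, (n_experiments, n_per_group))

# One two-sided t-test per simulated experiment
_, p_values = stats.ttest_ind(treatment, control, axis=1)

significant = p_values < 0.01
n_sig = significant.sum()
n_sig_null = (significant & ~effect_is_real).sum()

print(f"'Significant' results (p < 0.01): {n_sig}")
print(f"...of which had no true effect: {n_sig_null} ({n_sig_null / n_sig:.1%})")
# With a low base rate, far more than 1% of the p < 0.01 results come from
# experiments where there was no effect at all.
```

Because real effects are rare in this setup, a large share of the p < 0.01 results are false positives, not the 1% a naive reading of the p-value would suggest.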