What steven0461 said. Square rooting both sides of the Bienaymé formula gives the standard deviation of the mean going as 1/√n. Taking precision as the reciprocal of that “standard error” then gives a √n dependence.
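The 1/√n scaling of the standard error is easy to check empirically. Below is a small illustrative simulation (not from the original thread): it estimates the standard deviation of the mean of n unit-variance draws and shows it halving when n quadruples, as the Bienaymé formula predicts. The function name and trial count are my own choices for illustration.

```python
import random
import statistics

random.seed(0)

def sd_of_mean(n, trials=2000):
    """Empirical standard deviation of the mean of n unit-variance draws."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

for n in (100, 400, 1600):
    # Theory: sd of the mean ~ 1/sqrt(n), so quadrupling n should halve it.
    print(n, round(sd_of_mean(n), 3))
```

Taking the reciprocal of these values gives the "precision," which grows as √n.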
I agree that methodology is important, but humans can often be good at inferring causality even without randomized controlled trials.
This is true, but we’re also often wrong, and for small-to-medium effects it’s often tough to say when we’re right and when we’re wrong without a technique that severs all possible links between confounders and outcome.
Where is the math for this?
Edit: more thoughts on why I don’t think the Bienaymé formula is too relevant here; see also.
http://en.wikipedia.org/wiki/Variance#Sum_of_uncorrelated_variables_.28Bienaym.C3.A9_formula.29
(Of course, any systematic bias stays the same no matter how big you make the sample.)
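A quick sketch of that point, under the hypothetical assumption of a constant additive bias of +0.5 on every measurement: the sample mean converges to the biased value no matter how large n gets, even though the random scatter shrinks.

```python
import random
import statistics

random.seed(1)

def biased_mean(n, bias=0.5):
    """Mean of n draws, each contaminated by a constant additive bias."""
    return statistics.fmean(random.gauss(bias, 1.0) for _ in range(n))

for n in (100, 10000):
    # The average converges to the biased value (0.5), not the true value 0,
    # however big the sample grows: more data reduces variance, not bias.
    print(n, round(biased_mean(n), 2))
```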