No, it does not:
“the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.”
Note that this talks about “performing the same experiment a large number of times”, which guarantees independence in the absence of memory effects. It also talks solely about the sample average, nothing else.
What you probably mean is the central limit theorem, which in its classical form assumes independent, identically distributed random variables with finite variance.
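To make the contrast concrete, here is a minimal simulation sketch (my illustration, not part of the original exchange), assuming fair-coin Bernoulli trials and numpy: the law of large numbers is about the running sample average settling near the expected value, while the central limit theorem is about the distribution of that average across many repeated samples.

```python
# Minimal sketch (assumes numpy and fair-coin Bernoulli trials; the names
# and numbers are illustrative, not from the thread).
import numpy as np

rng = np.random.default_rng(0)

# Law of large numbers: the running sample average of independent trials
# tends toward the expected value (0.5 for a fair coin) as the number of
# trials grows.
flips = rng.integers(0, 2, size=100_000)
running_mean = np.cumsum(flips) / np.arange(1, flips.size + 1)
print("mean after 100 flips:    ", running_mean[99])
print("mean after 100,000 flips:", running_mean[-1])

# Central limit theorem: across many repeated samples of a fixed size n,
# the sample mean is approximately normally distributed around 0.5 with
# standard deviation close to sigma / sqrt(n), where sigma = 0.5 here.
n, reps = 1_000, 5_000
sample_means = rng.integers(0, 2, size=(reps, n)).mean(axis=1)
print("observed spread of sample means:", sample_means.std())
print("theoretical sigma / sqrt(n):    ", 0.5 / np.sqrt(n))
```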
I’m not sure what the distinction is. A sample of size n is the same thing as n trials (though not necessarily n independent trials).
Talking about substituting sample size n*p for p trials of size n makes a lot more sense in a coin-flipping context than it does in, say, epidemiology. If I’m doing a cohort study, I need a group of people who all start being tracked at the same timepoint (since that’s how I’m going to try to limit confounding).
Although it’s theoretically possible to keep adding people to track, the data get awfully messy. I’d rather not add them bit by bit. I’d prefer two cohort studies, each with a sample size of n, to one study that kept adding to the panel and ended up with 3n people.
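Setting the practical cohort objections aside for a moment, the purely statistical side of that substitution can be sketched as follows (my illustration, under an i.i.d. assumption, with made-up parameters): pooling p independent trials of size n and running one trial of size n*p give a mean estimate with the same precision, which is the sense in which the substitution is usually meant.

```python
# Sketch (assumes i.i.d. normal observations; the parameters are invented
# for illustration): pool p trials of size n versus one trial of size n*p.
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 200, 5, 2_000
true_mean, sigma = 10.0, 2.0

pooled_small = np.empty(reps)
single_big = np.empty(reps)
for i in range(reps):
    # p separate trials of size n, all observations pooled into one estimate
    pooled_small[i] = rng.normal(true_mean, sigma, size=(p, n)).mean()
    # one trial of size n * p
    single_big[i] = rng.normal(true_mean, sigma, size=n * p).mean()

# With i.i.d. data the two designs have the same sampling variability;
# the objection above is about confounding and logistics, not this arithmetic.
print("sd of pooled p-trials estimate: ", pooled_small.std())
print("sd of single n*p-trial estimate:", single_big.std())
print("theoretical sigma / sqrt(n*p):  ", sigma / np.sqrt(n * p))
```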
Right, but the mathematical meaning of the word “trial” is a little more general, in the sense that even if you pick the sample all at once, you can consider each member of the sample a “trial”.
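A tiny illustration of that usage (again my sketch, not from the discussion): whether the sample is drawn all at once or built up one observation at a time, each member contributes one Bernoulli “trial”, and the count of successes has the same Binomial(n, p) distribution either way.

```python
# Sketch (assumes independent draws with success probability prob; the
# numbers are illustrative): a sample drawn at once versus n sequential
# trials. Each sample member plays the role of one trial in both cases.
import numpy as np

rng = np.random.default_rng(2)
n, prob = 50, 0.3

drawn_at_once = rng.binomial(1, prob, size=n)                        # one sample of size n
one_at_a_time = np.array([rng.binomial(1, prob) for _ in range(n)])  # n trials

print("successes, sample drawn at once:", drawn_at_once.sum())
print("successes, trial by trial:      ", one_at_a_time.sum())
```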