In almost all statements of the CLT, there is a requirement that the individual random variables be independent and identically distributed. The kind of sequential evidence you normally get in real life is almost never of this form. We have to go out of our way to make it have this form, for example by establishing suitably stringent experimental procedures and ignoring all evidence that doesn’t go through those procedures.
The example you gave of convolving beta distributions does relax the “identical” part of the CLT, though the variables are still assumed to be independent. There are variations on the CLT that apply to independent but non-identical distributions, but the conditions on the individual distributions are even stronger than in the usual CLT. For example, one theorem applies to strongly unimodal distributions with variance bounded both above and away from 0, and tails that fall off at least exponentially. Most beta distributions satisfy this, so if you convolve enough of them you’ll always get something approximately Gaussian. One thing to be careful of is that the rate of convergence can be much slower than in the i.i.d. case, so you may need to sum many more variables before the result looks Gaussian.
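As a quick sanity check of the non-identical case, here is a simulation sketch that convolves a few hundred beta distributions with varying parameters and compares the standardized sum's moments to those of a standard normal. The parameter ranges are my own arbitrary choice for illustration, not anything from the discussion above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_samples = 200, 100_000

# Independent but NON-identical terms: each X_i is Beta(a_i, b_i)
# with its own parameters drawn from an arbitrary range.
a = rng.uniform(2, 5, size=n_vars)
b = rng.uniform(2, 5, size=n_vars)

# Each row is one realization of (X_1, ..., X_n); sum across the row.
samples = rng.beta(a, b, size=(n_samples, n_vars)).sum(axis=1)

# Standardize the sum and compare its moments to N(0, 1):
# skewness should be near 0 and kurtosis near 3 if it is roughly Gaussian.
z = (samples - samples.mean()) / samples.std()
skew = (z**3).mean()
kurt = (z**4).mean()
print(f"skewness ~ {skew:.3f} (normal: 0), kurtosis ~ {kurt:.3f} (normal: 3)")
```

With 200 summands the moments come out very close to Gaussian; shrinking `n_vars` shows the slower convergence mentioned above.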
Once you remove independence, though, all bets are off. The distribution of the sum of such a sequence is a projection of some almost arbitrary high-dimensional distribution, which allows all sorts of possible forms. You can even sum increasing numbers of uniform random variables and get a sequence of distributions, some of them discrete, that never converges to anything.
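To make the dependent failure mode concrete, here is one simple construction of my own (not necessarily the one intended above): perfectly anti-correlated uniforms, where each variable cancels the previous one. The partial sums then alternate between a uniform distribution and a point mass, so the sequence of distributions never converges.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Dependent uniforms: X_1 ~ Uniform(-1, 1) and X_{i+1} = -X_i.
# Then S_n = X_1 when n is odd and S_n = 0 when n is even, so the
# distribution of S_n oscillates forever instead of converging.
x1 = rng.uniform(-1, 1, size=n_samples)

s_odd = x1                   # S_n for odd n: still Uniform(-1, 1)
s_even = np.zeros_like(x1)   # S_n for even n: a point mass at 0

print("odd-n sum std:", s_odd.std(), "| even-n sum std:", s_even.std())
```

Each marginal here is a perfectly well-behaved uniform; it is only the dependence structure that destroys convergence.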
Thank you so much for the explanation! I haven’t looked into the behavior of dependent distributions in the context of the CLT at all. It’s totally believable that non-independence could destroy convergence properties in a way that non-identicality doesn’t. I’m on my phone right now but will probably add a disclaimer to the end of this post to reflect your challenge to it. Thanks again~
Hmm, a big problem if so! Can you give me an example of the kind of intermediate conditional distribution you mean?