I believe this entry could have been written in much more general terms, i.e., why use [Gaussian] approximations at all nowadays? There is one answer: to obtain general, asymptotic results. But in practice, given the current state of computers and statistical software, there is little point in using approximations, particularly since they do not work for small samples such as the ones you mention. And in practice, we need to deal with small samples as well.
The general advice would then be: if you need to model some random phenomenon, use the tools that model it best. If beta, Poisson, gamma, etc. distributions seem more adequate, do not use normal approximations at all.
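To illustrate the point, here is a minimal sketch (using a hypothetical small sample of 2 successes in 10 trials, and SciPy) comparing the normal-approximation (Wald) confidence interval for a proportion with the exact Clopper-Pearson interval based on the beta distribution:

```python
from scipy.stats import beta, norm

# Hypothetical small sample: 2 successes out of 10 trials
k, n = 2, 10
p_hat = k / n
alpha = 0.05

# Normal (Wald) approximation: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)
z = norm.ppf(1 - alpha / 2)
se = (p_hat * (1 - p_hat) / n) ** 0.5
wald = (p_hat - z * se, p_hat + z * se)

# Exact Clopper-Pearson interval, obtained from beta quantiles
lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
exact = (lower, upper)

print(f"Wald (normal approx.):   ({wald[0]:.3f}, {wald[1]:.3f})")
print(f"Clopper-Pearson (exact): ({exact[0]:.3f}, {exact[1]:.3f})")
```

With this sample the Wald interval even dips below zero, while the exact interval stays within [0, 1], which is exactly the kind of small-sample failure of the normal approximation described above.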