But means and standard deviations are the results of modeling a Gaussian distribution, and if the model fit is too bad, these metrics simply don’t apply to this dataset.
?
Means and standard deviations are general properties one can compute for any statistical distribution which doesn’t have pathologically fat tails. (Granted, it would’ve been conceptually cleaner for Yvain to present the mean & SD of log donations, but there’s nothing stopping us from using his mean & SD to estimate the parameters of e.g. a log-normal distribution instead of a normal distribution.)
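For instance, under a log-normal assumption the parameters can be backed out from a raw mean & SD by the method of moments. A minimal R sketch, with made-up numbers standing in for the survey's actual mean & SD:

```r
## Method of moments: back out log-normal parameters from a raw mean & SD.
## The numbers below are made-up stand-ins, not the survey's actual figures.
m <- 1000   # mean donation (hypothetical)
s <- 5000   # standard deviation of donations (hypothetical)

sdlog   <- sqrt(log(1 + (s / m)^2))  # sigma of the log donations
meanlog <- log(m) - sdlog^2 / 2      # mu of the log donations
c(meanlog = meanlog, sdlog = sdlog)
```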
You can indeed compute means and standard deviations for any distribution with small enough tails, but if the distribution is far from normal then they may not be very useful statistics. E.g., an important reason why the mean of a bunch of samples is an interesting statistic is that if the underlying distribution is normal then the sample mean is the maximum-likelihood estimator of the distribution’s mean. But, e.g., if the underlying distribution is a double exponential (Laplace) then the maximum-likelihood estimator of its location is the median rather than the mean. Or if the distribution is Cauchy then the sample mean is just as noisy as a single sample.
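For illustration, a quick R sketch of the Cauchy point (the sample sizes here are arbitrary): the mean of 1,000 Cauchy draws is spread out just as much as a single draw.

```r
## The sample mean of n i.i.d. Cauchy draws has the same Cauchy distribution
## as a single draw, so averaging buys no precision at all.
set.seed(1)
single_draws  <- rcauchy(10000)                          # one draw at a time
means_of_1000 <- replicate(10000, mean(rcauchy(1000)))   # means of 1,000 draws
c(IQR_single = IQR(single_draws), IQR_of_means = IQR(means_of_1000))
## Both interquartile ranges come out near 2, the IQR of a standard Cauchy.
```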
Thanks for prompting me to take a closer look at this.
The distribution is certainly very positively skewed, but that’s exactly why the histogram is a blunt diagnostic: almost all of the probability mass is lumped into the first bar, so it’s impossible to see what the distribution looks like for small donations. There could be a power law there, but it’s not obvious that the distribution isn’t just log-normal with enough dispersion to produce lots of small values.
Looking at the actual numbers from the survey data file, I see it’s impossible for the distribution to be strictly log-normal or a power law, because neither distribution includes zero in its support, while zero is actually the most common donation reported.
I can of course still ask which distribution best fits the rest of the donation data. A quick & dirty way to eyeball this is to take logs of the non-zero donations and plot their distribution: if the non-zero donations are log-normal, I’ll see a bell curve; if they’re Pareto, I’ll see a monotonically decreasing curve. I plot the kernel density estimate (instead of a histogram ’cause binning throws away information) and I see

[kernel density plot of the log non-zero donations]

which is definitely closer to a bell curve. So the donations seem closer to a log-normal distribution than a Pareto distribution. Still, the log-donation distribution probably isn’t exactly normal (it looks a bit too much like a cone to me). Let’s slap a normal density on top and see how that looks. Looks like the mean is about 6 and the standard deviation about 2?

[kernel density plot with a normal density overlaid]

Wow, that’s a far closer match than it has any right to be! Admittedly, if I ask R to run a Lilliefors test, the test rejects the hypothesis of normality (p = 0.0007), and it remains the case that the donations are neither log-normal nor power-law distributed because some of the values are zero. But the non-zero donations look impressively close to a log-normal distribution, and I really doubt a Pareto distribution would fit them better. (And in general it’s easy to see Pareto distributions where they don’t really exist.)
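In R the whole check is only a few lines; this is a sketch rather than the exact code, with `donations` standing in for the donation column from the survey file:

```r
## Sketch of the eyeball check and the Lilliefors test described above;
## `donations` stands in for the 2014 survey's donation column.
library(nortest)   # provides lillie.test()

nonzero <- donations[!is.na(donations) & donations > 0]
logd    <- log(nonzero)

plot(density(logd), main = "Log of non-zero donations")  # kernel density estimate
curve(dnorm(x, mean = 6, sd = 2), add = TRUE, lty = 2)   # eyeballed normal overlay

lillie.test(logd)   # Lilliefors test of normality on the logs
```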
> Admittedly, if I ask R to run a Lilliefors test, the test rejects the hypothesis of normality (p = 0.0007), and it remains the case that the donations are neither log-normal nor power-law distributed because some of the values are zero.
As I understand it, tests of normality are not all that useful because: they are underpowered & won’t reject normality at the small samples where you need to know about non-normality because it’ll badly affect your conclusions; and at larger samples like the LW survey, because real-world data is rarely exactly normal, they will always reject normality even when it makes not the slightest difference to your results (because the sample is now large enough to benefit from the asymptotics and various robustnesses).
When I was looking at donations vs EA status earlier this year, I just added +1 to remove the zero-inflation, and then logged donation amount. Seemed to work well. A zero-inflated log-normal might have worked even better.
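In code the two options look roughly like this (a sketch; `donations` again stands in for the survey's donation column):

```r
## (a) Shifted-log transform: log(x + 1) keeps the zero donations in the model.
log_don <- log1p(donations)

## (b) Two-part ("zero-inflated"/hurdle) log-normal: model whether someone
##     donates at all separately from how much they give when they do.
gave    <- !is.na(donations) & donations > 0
p_gave  <- mean(gave[!is.na(donations)])   # Bernoulli part
meanlog <- mean(log(donations[gave]))      # log-normal part, mu
sdlog   <- sd(log(donations[gave]))        # log-normal part, sigma
c(p_gave = p_gave, meanlog = meanlog, sdlog = sdlog)
```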
Also, you don’t have to look at only one year’s data; you can look at 3 or 4 by making sure to filter out responses based on whether they report answering a previous survey.
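A sketch of that filtering step; the data-frame names and the `previous_survey` column are hypothetical, not the survey files' actual variable names:

```r
## Pool several years, keeping only respondents who say they have NOT
## answered a previous survey (so nobody is counted twice).
## `survey2012`, `survey2013`, `survey2014` and `previous_survey` are
## hypothetical names, not the actual file/column names.
surveys    <- list(survey2012, survey2013, survey2014)
first_time <- lapply(surveys, function(d) d[d$previous_survey %in% "No", ])
pooled     <- do.call(rbind, first_time)
```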
> As I understand it, tests of normality are not all that useful because: they are underpowered & won’t reject normality at the small samples where you need to know about non-normality because it’ll badly affect your conclusions; and at larger samples [...], because real-world data is rarely exactly normal, they will always reject normality even when it makes not the slightest difference to your results
I agree that normality tests are too insensitive for most small samples, and too sensitive for pretty much any big sample, but I’d presumed there was a sweet spot (when the sample size is a few hundred) where normality tests have decent sensitivity without giving everything a negligible p-value, and that the LW survey is near that sweet spot. If I’d been lazy and used R’s out-of-the-box normality test (Shapiro-Wilk) instead of following goocy’s recommendation (Lilliefors, which R hides in its nortest library) I’d have got an insignificant p of 0.11, so the sample [edit: of non-zero donations] evidently isn’t large enough to guarantee rejection by normality tests in general.
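Concretely, the two tests are (a sketch; `donations` stands in for the donation column as before):

```r
## Same logged non-zero donations, two different normality tests.
logd <- log(donations[!is.na(donations) & donations > 0])

shapiro.test(logd)   # base R's default (Shapiro-Wilk)
library(nortest)
lillie.test(logd)    # Lilliefors, from the nortest package
```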
> Also, you don’t have to look at only one year’s data; you can look at 3 or 4 by making sure to filter out responses based on whether they report answering a previous survey.
Certainly. It might be interesting to investigate whether the log-normal-with-zeroes distribution holds up in earlier years, and if so, whether the distribution’s parameters drift over time. Still, goocy’s complaint was about 2014′s data, so I stuck with that.
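A sketch of what that check might look like, with `surveys` a hypothetical named list of per-year data frames, each with a numeric `donations` column:

```r
## Per-year zero-donation share and log-normal parameters of the rest.
## `surveys` and its `donations` column are hypothetical stand-ins.
sapply(surveys, function(d) {
  x  <- d$donations[!is.na(d$donations)]
  nz <- x[x > 0]
  c(prop_zero = mean(x == 0),
    meanlog   = mean(log(nz)),
    sdlog     = sd(log(nz)))
})
```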
> ?
>
> Means and standard deviations are general properties one can compute for any statistical distribution which doesn’t have pathologically fat tails. (Granted, it would’ve been conceptually cleaner for Yvain to present the mean & SD of log donations, but there’s nothing stopping us from using his mean & SD to estimate the parameters of e.g. a log-normal distribution instead of a normal distribution.)
Is the link to “Logical disjunction” intentional?
It isn’t! Thanks for catching that, I’ve fixed the link.
> You can indeed compute means and standard deviations for any distribution with small enough tails, but if the distribution is far from normal then they may not be very useful statistics. E.g., an important reason why the mean of a bunch of samples is an interesting statistic is that if the underlying distribution is normal then the sample mean is the maximum-likelihood estimator of the distribution’s mean. But, e.g., if the underlying distribution is a double exponential (Laplace) then the maximum-likelihood estimator of its location is the median rather than the mean. Or if the distribution is Cauchy then the sample mean is just as noisy as a single sample.
I’d expect a Pareto distribution for charitable donations, not log-normal, and that’s exactly what the histogram looks like:

[histogram of reported donations, with almost all of the mass in the first bar]

Looks like alpha < 2, so the variance is infinite.
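If one wanted to put a number on that, the maximum-likelihood estimate of a Pareto shape parameter is a one-liner (a sketch; `donations` stands in for the survey's donation column):

```r
## MLE of the Pareto shape parameter alpha for the non-zero donations,
## taking the smallest non-zero donation as the scale x_min.
x     <- donations[!is.na(donations) & donations > 0]
x_min <- min(x)
alpha <- length(x) / sum(log(x / x_min))
alpha   # the Pareto variance is infinite when alpha <= 2
```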