So, one of the known things is that standard deviation varies by race. For example, both the African American mean and variance are lower than the European American mean and variance.
To the best of my knowledge, few people have actually applied goodness of fit tests to IQ score distributions to check normality.
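For concreteness, such a check would look something like the following; a minimal sketch in Python with scipy, run on simulated scores since the point here is the procedure rather than the data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated stand-in for a sample of IQ scores; a real check would
# use measured scores from a normed test.
scores = rng.normal(loc=100, scale=15, size=2000)

# Shapiro-Wilk tests the null that the sample comes from *some*
# normal distribution (mean and SD left unspecified).
w_stat, w_p = stats.shapiro(scores)
print(f"Shapiro-Wilk: W={w_stat:.4f}, p={w_p:.3f}")

# Kolmogorov-Smirnov against the specific N(100, 15) the test was
# normed to, which also checks the claimed mean and SD.
ks_stat, ks_p = stats.kstest(scores, "norm", args=(100, 15))
print(f"KS vs N(100, 15): D={ks_stat:.4f}, p={ks_p:.3f}")
```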
So, one of the known things is that standard deviation varies by race. For example, both the African American mean and variance are lower than the European American mean and variance.
I don't understand why this is relevant.
Hm. When I read the great-grandparent earlier, I got the impression it would be helpful to corroborate this claim in the great-great-grandparent:
In any population other than the one for which the test has been normed to follow a normal distribution with mean of 100 and standard deviation of 15, yes, results need not be normally distributed or to have a standard deviation of 15.
Rereading the great-grandparent now, it’s not clear to me why I got that impression. (I may have been thinking that the “general population,” as it contains distinct subpopulations, will be at best a mixture Gaussian rather than a Gaussian.)
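(For what it's worth, the mixture point is easy to demonstrate numerically. A sketch with made-up subpopulation parameters, not real demographic figures: pooling two exactly normal subpopulations with different means and variances yields a distribution that a goodness-of-fit test will reject as non-normal.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two exactly normal subpopulations with different (made-up) means
# and SDs, pooled into one "general population".
a = rng.normal(loc=100, scale=15, size=80_000)
b = rng.normal(loc=85, scale=13, size=20_000)
pooled = np.concatenate([a, b])

# The pooled mixture is mildly skewed with non-Gaussian kurtosis,
# even though each component is normal.
print("skew:", stats.skew(pooled))
print("excess kurtosis:", stats.kurtosis(pooled))

# D'Agostino-Pearson normality test (better suited to large n than
# Shapiro-Wilk) rejects normality for the pooled sample.
k2, p = stats.normaltest(pooled)
print(f"normaltest: K2={k2:.1f}, p={p:.2e}")
```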
I do agree that private_messaging's claim (that the ratio we see at the tails doesn't seem to follow what the normal distribution would predict) hinges on the right tail being fatter than what the normal distribution predicts. (The mixture Gaussian claim is irrelevant if you've split the general population up into subpopulations that are normally distributed, unless the low-IQ group itself contains subpopulations, so that it isn't normally distributed. There's some reason to believe this is true for African Americans, for example, if you don't separate people out by ancestry and recency of immigration.)
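To see how much the tail ratio depends on tail thickness, here's a toy comparison of a unit-variance normal against a unit-variance Student-t; df=5 is an arbitrary choice of something fatter-tailed than the normal:

```python
import numpy as np
from scipy import stats

# Upper-tail mass at 3 and 4 units for a unit-variance normal versus
# a unit-variance Student-t (df=5, rescaled so its variance is 1).
t5 = stats.t(5, scale=np.sqrt(3 / 5))
for name, dist in [("normal", stats.norm), ("t(df=5)", t5)]:
    p3, p4 = dist.sf(3), dist.sf(4)
    print(f"{name}: P(X>3)={p3:.2e}  P(X>4)={p4:.2e}  ratio={p4 / p3:.3f}")
```

In this toy comparison, roughly a third of the fat-tailed distribution's mass past 3 units is also past 4, versus about 2% for the normal, so observed tail ratios are quite diagnostic of tail shape.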
The data is sparse enough that I would not be surprised if the right tail were in fact fatter, but I don't think anyone has directly investigated it. Moreover, a few of the investigations that hinge on the thickness of the tails (like Sex Differences in Mathematical Aptitude, which predicts female representation in elite math institutions from the mean and variance of math SAT scores in large populations) seem to have worked well, which is evidence for normality.
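For reference, the prediction machinery in that style of analysis is just upper-tail arithmetic on two normal distributions; a sketch with hypothetical numbers rather than the actual figures from that article:

```python
from scipy import stats

# Hypothetical group parameters on a common score scale; the real
# exercise would plug in measured math SAT means and SDs.
mean_m, sd_m = 500.0, 120.0
mean_f, sd_f = 490.0, 110.0
cutoff = 750.0  # an "elite" threshold, also illustrative

# Under normality, each group's representation past the cutoff is
# just an upper-tail probability.
p_m = stats.norm.sf(cutoff, loc=mean_m, scale=sd_m)
p_f = stats.norm.sf(cutoff, loc=mean_f, scale=sd_f)

# Predicted ratio among those above the cutoff, assuming equal group
# sizes in the underlying population.
print(f"P(score > cutoff), group M: {p_m:.2e}")
print(f"P(score > cutoff), group F: {p_f:.2e}")
print(f"predicted ratio at the tail: {p_m / p_f:.1f} : 1")
```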