Then the next step is to put these data in the bell curve, saying: “therefore 23⁄50 means 0 sigma = 100 IQ” and “therefore 41⁄50 means 2 sigma = 130 IQ”.
This is NOT forcing the outcome to be a bell curve. This is just normalizing to a given mean and standard deviation, a linear operation that does not change the shape of the distribution.
Consider a hypothetical case where an IQ test consists of 100 questions and 100 people take it. These hundred people all get a different number of questions correct—from 1 to 100: the distribution of the number of correct answers is flat or uniform over [1 .. 100]. This is a fact about the test. Now you normalize the mean to 100 and one standard deviation to 15, and yet the distribution remains flat and does not magically become a bell curve.
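To see this concretely, here is a minimal sketch in Python, using the numbers from the hypothetical above: linearly rescaling the flat raw scores to mean 100 and SD 15 shifts and stretches them, but consecutive scores stay equally spaced, so the distribution is exactly as flat as before.

```python
import statistics

# Hypothetical raw scores from the example above: 100 test-takers,
# each with a different number of correct answers from 1 to 100.
raw_scores = list(range(1, 101))

m = statistics.mean(raw_scores)     # 50.5
s = statistics.pstdev(raw_scores)   # about 28.87

# Linear normalization: shift and scale so the mean is 100 and the SD is 15.
iq = [100 + 15 * (x - m) / s for x in raw_scores]

print(round(statistics.mean(iq), 2), round(statistics.pstdev(iq), 2))  # 100.0 15.0

# The shape is unchanged: consecutive raw scores map to equally spaced
# IQ values, so the histogram is exactly as flat as before.
gaps = {round(b - a, 6) for a, b in zip(iq, iq[1:])}
print(gaps)  # a single gap size, i.e. still a uniform distribution
```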
Maybe it was wrong for me to use the word “normalization” in this context, but no, the distribution of raw scores is not mapped linearly to the distribution of IQs. It is mapped onto the bell curve.
Otherwise every intelligence test would produce a different intelligence curve, because inventing 100 questions that yield the same distribution of raw scores as some other set of 100 questions would be an impossible task. (Just try to imagine how you would obtain a set of 100 questions for which the distribution of raw scores is linear. Keep in mind that every round of testing on many real subjects costs a lot of money, and with only a few subjects you won’t get statistical significance.)
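For contrast, here is a minimal sketch of the rank-based norming being claimed here: each raw score is converted to its percentile rank, and the percentile is then read off a normal curve with mean 100 and SD 15. The function name and the midpoint-rank convention are illustrative assumptions; real norming procedures are more elaborate, but the core idea is that the rank of a raw score, not its numeric value, determines the reported IQ.

```python
from statistics import NormalDist

def norm_to_bell_curve(raw_scores, mean=100.0, sd=15.0):
    """Illustrative rank-based norming: map each raw score onto a bell
    curve via its percentile rank (midpoint convention for ties)."""
    n = len(raw_scores)
    ordered = sorted(raw_scores)
    target = NormalDist(mu=mean, sigma=sd)

    def percentile(x):
        below = sum(1 for s in ordered if s < x)
        equal = sum(1 for s in ordered if s == x)
        return (below + 0.5 * equal) / n   # strictly between 0 and 1

    return [target.inv_cdf(percentile(x)) for x in raw_scores]

# Applied to the flat raw-score example, the output is bell-shaped by
# construction, whatever the raw distribution looked like.
flat_raw = list(range(1, 101))
iqs = norm_to_bell_curve(flat_raw)
print(round(min(iqs), 1), round(max(iqs), 1))  # roughly 61.4 and 138.6
```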
Could you provide links showing this to be the case?
There is a helpful theorem: the central limit theorem.
It assumes that all the variables you’re summing are independent.
Weaker forms of CLT hold up even if you relax the independence assumption. See Wikipedia for details.
As a practical matter, in IQ testing even with only linear normalization of raw scores you will get something approximately Gaussian.
I wouldn’t count on that more than about one standard deviation away from the mean.
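Both of the last two comments can be checked with a small simulation, assuming, purely for illustration, a test of 100 independent pass/fail items with assorted difficulties: the raw-score totals come out roughly bell-shaped, and the printout shows how closely the tails track a Gaussian fitted to the same mean and SD at one, two, and three standard deviations above the mean.

```python
import random
from statistics import NormalDist, mean, pstdev

random.seed(0)

# Purely illustrative test: 100 independent pass/fail items whose
# difficulties (probability of a correct answer) range from 0.2 to 0.8,
# answered by 50,000 simulated subjects.
difficulties = [0.2 + 0.6 * i / 99 for i in range(100)]

def raw_score():
    return sum(random.random() < p for p in difficulties)

scores = [raw_score() for _ in range(50_000)]
mu, sigma = mean(scores), pstdev(scores)
fitted = NormalDist(mu, sigma)

# How often do subjects land beyond 1, 2, 3 SDs above the mean, compared
# with what an exactly Gaussian distribution would predict?
for k in (1, 2, 3):
    cutoff = mu + k * sigma
    empirical = sum(s > cutoff for s in scores) / len(scores)
    gaussian = 1 - fitted.cdf(cutoff)
    print(f"{k} sigma: empirical {empirical:.4f} vs fitted Gaussian {gaussian:.4f}")
```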
Not exactly Gaussian—that’s even theoretically impossible because a Gaussian has infinitely long tails—but approximately Gaussian. Bell-shaped, in other words.
Fallacy of grey. Certain approximations are worse than others.
So in this particular example, which approximation is worse than which other approximation and by which metric?
An IQ test in which the scores are only normalized linearly is a worse approximation to a Gaussian distribution than one which is intentionally designed to give Gaussianly distributed scores.
Well, duh, but I don’t see the point.
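One possible way to cash out “worse approximation” (a choice of metric assumed here, not one anybody in the thread specified) is the Kolmogorov-Smirnov distance between the score distribution and a Gaussian fitted to its own mean and SD. Reusing the flat raw-score example: linear rescaling leaves the distribution visibly far from Gaussian, while the rank-based mapping is close to Gaussian essentially by construction.

```python
from statistics import NormalDist, mean, pstdev

def ks_distance_to_gaussian(sample):
    """Kolmogorov-Smirnov distance between a sample and a Gaussian
    fitted to that sample's own mean and standard deviation."""
    fitted = NormalDist(mean(sample), pstdev(sample))
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        c = fitted.cdf(x)
        d = max(d, abs(i / n - c), abs((i - 1) / n - c))
    return d

flat_raw = list(range(1, 101))

# Linear rescaling to mean 100, SD 15: shape-preserving, so still flat.
m, s = mean(flat_raw), pstdev(flat_raw)
linear_iq = [100 + 15 * (x - m) / s for x in flat_raw]

# Rank-based mapping onto a 100/15 bell curve, as sketched earlier.
target = NormalDist(100, 15)
rank_iq = [target.inv_cdf((i - 0.5) / 100) for i in range(1, 101)]

print(round(ks_distance_to_gaussian(linear_iq), 3))  # roughly 0.06
print(round(ks_distance_to_gaussian(rank_iq), 3))    # well under 0.01
```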