I thought IQ was distributed on a bell curve by the design of the metric?
I hadn’t thought of that! Is there any independent reason to believe that intelligence is “naturally” distributed this way?
To answer that question, you first have to specify how the number that serves as the measure of intelligence is obtained. Unlike with height, there is no obvious simple way to come up with a number, and elaborate methods can always be engineered so as to change the resulting distribution.
In fact, at the time when I delved into the IQ research literature to try to make some sense of these controversies, one of my major frustrations was that nobody, to my knowledge, asked the following question: once a test has been normed to produce a normal distribution for a given population, what exact patterns of deviation from normality do we see when we apply it to different populations (or to various non-representative subpopulations)? It seems to me that a whole lot of insight about the Flynn effect and other mysterious phenomena could be gained this way, and yet as far as I know, nobody has done it.
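For concreteness, here is a minimal sketch (in Python, with entirely made-up score distributions) of what such an analysis could look like: norm the test so that a reference population comes out N(100, 15) by construction, then push a second population's raw scores through the same mapping and see how far the result departs from normality.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical raw test scores; the gamma parameters are invented just to
# give two populations with different, skewed raw-score distributions.
norming_raw = np.sort(rng.gamma(shape=8.0, scale=5.0, size=100_000))
other_raw = rng.gamma(shape=12.0, scale=4.0, size=20_000)

def norm_to_iq(raw_scores, sorted_reference):
    """Map raw scores to IQ via their percentile in the norming sample."""
    pct = np.searchsorted(sorted_reference, raw_scores) / len(sorted_reference)
    pct = np.clip(pct, 1e-6, 1 - 1e-6)      # keep norm.ppf finite at extremes
    return 100.0 + 15.0 * stats.norm.ppf(pct)

norming_iq = norm_to_iq(norming_raw, norming_raw)
other_iq = norm_to_iq(other_raw, norming_raw)

# The norming sample is ~N(100, 15) by construction; any skew or excess
# kurtosis in the second group is exactly the "pattern of deviation" at issue.
for name, iq in (("norming", norming_iq), ("other", other_iq)):
    print(f"{name:8s} mean={iq.mean():6.1f}  sd={iq.std():5.1f}  "
          f"skew={stats.skew(iq):+.3f}  kurtosis={stats.kurtosis(iq):+.3f}")
```

Nothing here is from any actual study; the point is only that once the score-to-IQ mapping is frozen on one population, the shape of a second population's IQ distribution becomes an empirical question rather than an artifact of norming.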
I think Vladimir has the right of it—it’s neither clear how best to measure ‘intelligence’ nor how what-is-measured-by-IQ is ‘naturally’ distributed.
As I understand it, a ten-point gap seems to correspond to a larger difference in actual ability between, say, IQs of 100 and 110 than between 160 and 170. This suggests to me that the scale is ‘stretched’ at the high end (though not at the low end?).
It’s an interesting question!
I’m not an expert, but no, I don’t think so. I think IQ tests are normalized (set so they have the same mean and standard deviation), but the distribution could still be non-normal. Of course, the total score is the sum of many small factors (individual questions), which perhaps gives another reason to expect the observed distribution to be roughly normal: sums of many small, roughly independent contributions tend toward a normal distribution by the central limit theorem.
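A quick toy simulation of that last point (all parameters invented): when a total score is the sum of many roughly independent pass/fail items, the central limit theorem pushes the raw-score distribution toward a normal shape even before any norming is applied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# 120 pass/fail items with made-up difficulties; a person's raw score is
# the number of items passed, so each item is one small independent factor.
n_people, n_items = 50_000, 120
p_pass = rng.uniform(0.2, 0.9, size=n_items)     # hypothetical difficulties
scores = (rng.random((n_people, n_items)) < p_pass).sum(axis=1)

# Near-zero skew and excess kurtosis mean the raw-score distribution is
# already close to normal, with no explicit norming involved.
print(f"mean={scores.mean():.1f}  sd={scores.std():.1f}  "
      f"skew={stats.skew(scores):+.3f}  "
      f"excess kurtosis={stats.kurtosis(scores):+.3f}")
```

Of course, this toy model gives everyone the same ability, so the near-normality comes purely from item-level noise; with real populations the shape also depends on how ability itself is distributed, which is the open question above.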