While mistakes can of course go in either direction, they don’t actually go in either direction.
I intuit that this is likely to be a popular view among sceptics, but I do not recall ever being presented with research that supports this by anyone. To avoid the lure of “undiscriminating scepticism”, I am requesting to see the evidence of this.
I agree that, for numerous reasons, self-reported IQ scores, SAT scores, ACT scores and any other scores are likely to have some amount of error, and I think it’s likely for the room for error to be pretty big. On that we agree.
An average thirty points higher than normal seems to me to be quite a lot more than “pretty big”. That’s the difference between an IQ in the normal range and an IQ large enough to qualify for every definition of gifted. To use your metaphor, that’s like having a 6-incher and saying it’s 12. I can see guys unconsciously saying it’s 7 if it’s 6, or maybe even 8. But I have a hard time believing that most of these people have let their imaginations run so far away with them as to accidentally believe that they’re Mensa level gifted when they’re average. I’d bet that there was a significant amount of error, but not an average of 30 points.
If you agree with those two, then whether we agree over all just depends on what specific belief we’re each supporting.
I think these beliefs are supported:
The SAT, ACT, self-reported IQ and / or iqtest.dk scores found on the survey are not likely to be highly accurate.
Despite inaccuracies, it’s very likely that the average LessWrong member has an IQ above average—in other words, I don’t think that the scores reported on the survey are so inaccurate that I should believe that most LessWrongers actually have just an average IQ.
LessWrong is (considering a variety of pieces of evidence, not just the survey) likely to have more gifted people than you’d find by random chance.
Do we agree on those three beliefs?
If not, then please phrase the belief(s) you want to support.
Even if every self-reported IQ is exactly correct, the average of the self-reported IQ values can still be (and likely will still be) higher than the average of the readership’s IQ values.
Consider two readers, Tom and Jim. Tom does an IQ test, and gets a result of 110. Jim does an IQ test, and gets a result of 90. Tom and Jim are both given the option to fill in a survey, which asks (among other questions) what their IQ is. Neither Tom nor Jim intend to lie.
However, Jim seems significantly more likely to decide not to participate, while Tom may decide to fill in the survey as a minor sort of showing off. This effect will skew the average upwards. Perhaps not 30 points upwards… but it’s an additional source of bias, independent of any bias in individual reported values.
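The Tom-and-Jim effect is easy to see in a quick simulation. The response model below is entirely made up (a response probability that rises with IQ); only the direction of the effect matters, not the particular numbers:

```python
import random

random.seed(0)

# True IQs: normally distributed, mean 100, SD 15.
population = [random.gauss(100, 15) for _ in range(100_000)]

def responds(iq):
    # Hypothetical response model: low scorers like Jim rarely answer,
    # high scorers like Tom usually do.
    return random.random() < min(0.9, max(0.1, (iq - 55) / 85))

respondents = [iq for iq in population if responds(iq)]

pop_mean = sum(population) / len(population)
resp_mean = sum(respondents) / len(respondents)
print(round(pop_mean, 1), round(resp_mean, 1))  # respondents average a few points higher
```

Even with nobody misreporting anything, the respondents’ mean comes out several points above the population mean.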
I remember looking into this when I looked at the survey data. There were only a handful of people who reported two-digit IQs, which is consistent with both the concealment hypothesis and the high-average-intelligence hypothesis. If you assume that nonresponders have an IQ of 100 on average, the average IQ across everyone drops to 112. (I think this assumption is mostly useful for demonstrative purposes; I suspect that the prevalence of people with two-digit IQs on LW is lower than in the general population.)
(You could do some more complicated stuff if you had a functional form for concealment that you wanted to predict, but it’s not obvious to me that IQs on LW actually follow a normal distribution, which would make it hard to separate the oddities of concealment from the oddities of the LW population.)
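The adjustment above is just a weighted average. The overall mean of 138.7 is the survey figure quoted later in the thread; the response fraction here is a hypothetical number chosen to make the arithmetic visible:

```python
# Blend the responders' mean with an assumed mean of 100 for nonresponders.
responders_mean = 138.7      # survey mean among those who reported an IQ
nonresponders_mean = 100.0   # the assumption being made
response_fraction = 0.31     # hypothetical: share who reported an IQ

overall = (response_fraction * responders_mean
           + (1 - response_fraction) * nonresponders_mean)
print(round(overall, 1))  # 112.0, matching the adjusted figure above
```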
Ah! Good point! Karma for you! Now I will think about whether there is a way to figure out the truth despite this.
Ideas?
Hmmm. Tricky.
Select a random sampling of people (such as by picking names from the phonebook). Ask each person whether they would like to fill in a survey which asks, among other things, for their IQ. If a sufficiently large, representative sample is taken, the average IQ of the sample is likely to be 100 (confirm if possible). Compare this to the average reported IQ, in order to get an idea of the size of the bias.
Select a random sampling of lesswrongers, and ask them for their IQs. If they all respond, this should cut out the self-selection bias (though the odds are that at least some of them won’t respond, putting us back at square one).
It’s probably also worth noting that this is a known problem in statistics which is not easy to compensate for.
There’s also the selection effect of only getting answers from “people who, when asked, can actually name their IQ”.
As one of the sceptics, I might as well mention a specific feature of the self-reported IQs that made me pretty sure they’re inflated. (Even before I noticed this feature, I expected the IQs to be inflated because, well, they’re self-reported. Note that I’m not saying people must be consciously lying, though I wouldn’t rule it out. Also, I agree with your three bullet points but still find an average LW IQ of 138-139 implausibly high.)
The survey has data on education level as well as IQ. Education level correlates well with IQ, so if the self-reported IQ & education data are accurate, the subsample of LWers who reported having a “high school” level of education (or less) should have a much lower average IQ. But in fact the mean IQ of the 34% of LWers with a high school education or less was 136.5, only 2.2 points less than the overall mean.
There is a pretty obvious bias in that calculation: a lot of LWers are young and haven’t had time to complete their education, however high their IQs. This stacks the deck in my favour because it means the high-school-or-less group includes a lot of people who are going to get degrees but haven’t yet, which could exaggerate the IQ of the high-school-or-less group.
I can account for this bias by looking only at the people who said they were ≥29 years old. Among that older group, only 13% had a high school education or less...but the mean IQ of that 13% was even higher* at 139.6, almost equal to the mean IQ of 140.0 for older LWers in general. The sample sizes aren’t huge but I think they’re too big to explain this near-equality away as statistical noise. So IQ or education level or age was systematically misreported, and the most likely candidate is IQ, ’cause almost everyone knows their age & education level, and nerds probably have more incentive to lie on a survey about their IQ than about their age or education level.
* Assuming people start university at age 18, take 3 years to get a bachelor’s, a year to get a master’s, and then up to 7 years to get a PhD, everyone who’s going to get a PhD will have one at age 29. In reality there’re a few laggards but not enough to make much difference; basically the same result comes out if I use age 30 or age 35 as a cutoff.
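The subgroup comparison described above amounts to filtering by age and then averaging IQ within education levels. A minimal sketch on made-up rows (the field names and values here are hypothetical, not the actual survey data):

```python
# Hypothetical survey rows: (age, education, reported IQ).
rows = [
    (32, "high school", 135),
    (45, "PhD", 150),
    (29, "bachelor's", 138),
    (41, "high school", 142),
    (35, "master's", 139),
]

older = [(age, edu, iq) for age, edu, iq in rows if age >= 29]
hs_iqs = [iq for age, edu, iq in older if edu == "high school"]
all_iqs = [iq for age, edu, iq in older]

print(sum(hs_iqs) / len(hs_iqs))    # mean IQ, high-school-or-less group
print(sum(all_iqs) / len(all_iqs))  # mean IQ, whole 29+ group
```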
And I suspect if you look at the American population for that age cohort, you’ll find a much higher percentage than 13% with a “high school education or less”… All you’ve shown is that of the highschool-educated populace, LW attracts the most intelligent end, the people who are dropouts for whatever reason. Which for high-IQ people is not that uncommon (and one reason the generic education/IQ correlation isn’t close to unity). LW filters for IQ, and so only smart highschool dropouts bother to hang out here? Hardly a daring or special-pleading sort of suggestion. And if we take your reasoning at face value that the general population-wide IQ/education correlation must hold here, it would suggest that there would be hardly any autodidacts on LW (clearly not the case), such as our leading ‘high school education or less’ member, Eliezer Yudkowsky.
Right, but even among LWers I’d still expect the dropouts to have a lower average IQ if all that’s going on here is selection by IQ. Sketch the diagram. Put down an x-axis (representing education) and a y-axis (IQ). Put a big slanted ellipse over the x-axis to represent everyone aged 29+.
Now (crudely, granted) model the selection by IQ by cutting horizontally through the ellipse somewhere above its centroid. Then split the sample that’s above the horizontal line by drawing a vertical line. That’s the boundary between the high-school-or-less group and everyone else. Forget about everyone below the horizontal line because they’re winnowed out. That leaves group A (the high-IQ people with less education) and group B (the high-IQ people with more).
Even with the filtering, group A is visibly going to have a lower average IQ than B. So even though A comprises “the most intelligent end” of the less educated group, there remains a lingering correlation between education level and IQ in the high-IQ sample; A scores less than B. The correlation won’t be as strong as the general population-wide correlation you refer to, but an attenuated correlation is still a correlation.
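The ellipse argument can be checked numerically: draw correlated (education, IQ) pairs, cut horizontally at some high IQ, split vertically by education, and compare the two groups. The correlation of 0.5 and the cutoff of 125 are assumed round numbers, not estimates:

```python
import random

random.seed(1)

r = 0.5  # assumed education/IQ correlation in the full population
pairs = []
for _ in range(200_000):
    z_edu = random.gauss(0, 1)
    z_noise = random.gauss(0, 1)
    iq = 100 + 15 * (r * z_edu + (1 - r**2) ** 0.5 * z_noise)
    pairs.append((z_edu, iq))

# Horizontal cut: keep only the high-IQ slice (the "LW filter").
selected = [(e, q) for e, q in pairs if q > 125]

# Vertical split: below-average vs above-average education.
group_a = [q for e, q in selected if e < 0]   # less educated, high IQ
group_b = [q for e, q in selected if e >= 0]  # more educated, high IQ

print(round(sum(group_a) / len(group_a), 1))  # A: lower mean
print(round(sum(group_b) / len(group_b), 1))  # B: higher mean, but only by a point or so
```

The gap survives the filtering, but much attenuated, which is exactly the “an attenuated correlation is still a correlation” point.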
It seems implausible to me that education level would screen off the same parts of the IQ distribution in LW as it does in the general population, at least at its lower levels. It’s not too unreasonable to expect LWers with PhDs to have higher IQs than the local mean, but anyone dropping out of high school or declining to enter college because they dislike intellectual pursuits, say, seems quite unlikely to appreciate what we tend to talk about here.
Upvoted. If I repeat the exercise for the PhD holders, I find they have a mean IQ of 146.5 in the older subsample, compared to 140.0 for the whole older subsample, which is consistent with what you wrote.
How significant is that difference?
I did a back-of-the-R-session guesstimate before I posted and got a two-tailed p-value of roughly 0.1, so not significant by the usual standard, but I figured that was suggestive enough.
Doing it properly, I should really compare the PhD holders’ IQ to the IQ of the non-PhD holders (so the samples are disjoint). Of the survey responses that reported an IQ score and an age of 29+, 13 were from people with PhDs (mean IQ 146.5, SD 14.8) and 135 were from people without (mean IQ 139.3, SD 14.3). Doing a t-test I get t = 1.68 with 14.2 degrees of freedom, giving p = 0.115.
It’s a third of an SD and change (assuming a 15-point SD, which is the modern standard), which isn’t too shabby; comparable, for example, with the IQ difference between managerial and professional workers. Much smaller than the difference between the general population and PhDs within it, though; that’s around 25 points.
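For anyone who wants to reproduce the comparison, the t statistic and the Welch degrees of freedom follow directly from the summary statistics quoted above (13 PhD holders, mean 146.5, SD 14.8, versus 135 others, mean 139.3, SD 14.3):

```python
from math import sqrt

n1, m1, s1 = 13, 146.5, 14.8    # PhD holders (age 29+)
n2, m2, s2 = 135, 139.3, 14.3   # everyone else (age 29+)

v1, v2 = s1**2 / n1, s2**2 / n2  # squared standard errors
t = (m1 - m2) / sqrt(v1 + v2)
df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))  # Welch-Satterthwaite

print(round(t, 2), round(df, 1))  # 1.68 14.2, matching the figures above
```

The quoted p = 0.115 then comes from the two-sided tail of the t distribution at t = 1.68 with 14.2 degrees of freedom.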
I was really asking about sample size, as I was too lazy to grab the raw data.
Yes, and even without any particular expectation of inflation, once you see IQs that are very high, you can be quite sure the reported IQs tend to be inflated, simply because the prior is the bell curve.
Any time I see “undiscriminating scepticism” mentioned, it’s a plea to simply ignore necessarily low priors when evidence is too weak to change conclusions. Of course, it’s not true “undiscriminating scepticism”. If LW had undergone psychologist-administered IQ testing and those were the results, and there was then a lot of scepticism, you could claim that there’s some excessive scepticism. But as it is, rational processing of probabilities is not going to discriminate that much based on self-reported data.
Sceptics in that case, I suppose, being anyone who actually does the most basic “Bayesian” reasoning, such as starting with a Gaussian prior when you should (and understanding how an imperfect correlation between self-reported IQ and actual IQ would work on that prior, i.e. regression towards the mean when you are measuring by proxy). I picture there’s a certain level of Dunning-Kruger effect at play, whereby those least capable of probabilistic reasoning would think themselves most capable (further evidenced by calibration; even though the question may have been to blame, I’m pretty sure most people believed that a bad question couldn’t have that much of an impact).
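The regression-toward-the-mean step can be made concrete. Under a bivariate-normal model on standardized scales, the expected true IQ given a self-report shrinks toward the population mean by the correlation factor; the r = 0.6 here is a hypothetical value, not an estimate of the actual self-report/IQ correlation:

```python
def expected_true_iq(reported, r=0.6, population_mean=100.0):
    """Posterior mean of true IQ given a self-report, assuming a
    bivariate-normal model with correlation r and equal variances."""
    return population_mean + r * (reported - population_mean)

print(expected_true_iq(140))  # 124.0: a reported 140 regresses well below 140
```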
Wikipedia to the rescue, given that a lot of stuff is behind the paywall...
http://en.wikipedia.org/wiki/Illusory_superiority#IQ
“The disparity between actual IQ and perceived IQ has also been noted between genders by British psychologist Adrian Furnham, in whose work there was a suggestion that, on average, men are more likely to overestimate their intelligence by 5 points, while women are more likely to underestimate their IQ by a similar margin.”
and more amusingly
http://en.wikipedia.org/wiki/Human_penis_size#Erect_length
Just about any internet forum would select for people owning a computer and having an internet connection and thus cut off the poor, mentally disabled, and so on, improving the average. So when you state it this way—mere “above average”—it is a set of completely unremarkable beliefs.
It’d be interesting to check how common advanced degrees are among white Americans with an actual IQ of 138 and above, but I can’t find any info.
This was one of the things I checked when I looked into the IQ results from the survey here and here. One of the things I thought was particularly interesting was that there was a positive correlation between self-reported IQ and iqtest.dk scores (which are still self-reported, and could have been lied about, but hopefully reflect only deliberate lies rather than fuzzy memory effects) among posters, and a negative correlation among lurkers. This comment might also be interesting.
I endorse Epiphany’s three potential explanations, and would quantify the last one: I strongly suspect the average IQ of LWers is at least one standard deviation above the norm. I would be skeptical of the claim that it’s two standard deviations above the norm, given the data we have.
Wow, that’s quite interesting—that’s some serious Dunning-Kruger. Scatterplot could be of interest.
The thing to keep in mind is that even given a prior that errors can go either way equally, when you have obtained a result far from the mean, you must expect that the errors (including systematic errors) were predominantly in that direction.
The other issue is that in 1,000 people, about 1 will have an IQ of ≥146 or so, while something around 10 will have fairly severe narcissism (and this is not just your garden variety of overestimating oneself, but the level where it interferes with normal functioning).
A self-reported IQ of 146 is thus not really a good sign overall. Interestingly, some people do not understand that and go on about how others “punish” them for making poorly supported statements of exceptionality, while it is merely a matter of correct probabilistic reasoning.
The actual data is even worse than what comparisons of prevalence would suggest: 25% of people put themselves in the top 1% in some circumstances.
Yes, an average of 115 would be possible.
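The “about 1 in 1,000” figure above is just the Gaussian tail above 146 for a N(100, 15) distribution, which a couple of lines will confirm:

```python
from math import erfc, sqrt

def iq_tail(cutoff, mean=100.0, sd=15.0):
    # P(IQ >= cutoff) for a normal distribution, via the complementary
    # error function: Q(z) = erfc(z / sqrt(2)) / 2.
    return erfc((cutoff - mean) / (sd * sqrt(2))) / 2

p = iq_tail(146)
print(p)  # ~0.0011, i.e. roughly 1 person in 900
```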
The actual data is linked in the post near the end. If you drop three of the lurkers (who self-reported 180, 162, and 156 but scored 102, 108, and 107), then the correlation is positive (but small). (Both samples look like trapezoids, which is kind of interesting, but might be explained by people using different standard deviations.)
That sounds pretty high to me. I haven’t looked into narcissism as such, but I remember seeing similar numbers for antisocial personality disorder when I was looking into that, which surprised me; the confusion went away, however, when I noticed that I was looking at the prevalence in therapy rather than the general population.
Something similar, perhaps?