Ok, it’s possible that all of the following happened:
Most of the 1000 people decided to lie about their IQ on the LessWrong survey.
Most of the liars realized that their personality test results were going to be compared with Mensa’s personality type results, and it dawned on them that this would bring their IQ lie into question.
Most of the liars decided that instead of simply skipping the personality test question, or taking it to experience the enjoyment of finding out their type, they were going to fudge the personality test results, too.
Most of the liars actually had the patience to do an additional 72 questions specifically for the purpose of continuing to support a lie when they had just slogged through 100 questions.
Most of the liars did all of that extra work (researching the IQ correlation with the SAT and the ACT, and fudging 72 personality type questions) when it would have been so much easier to put their real IQ in the box, or simply skip the IQ question completely because it is not required.
Most of the liars succeeded in fudging their personality types. This is, of course, possible, but it is likely to be more complicated than it at first seems. They’d have to be lucky that enough of the questions give away their intelligence correlation in the wording (we haven’t verified that). They’d have to have enough of an understanding of what intelligent people are like to choose the right answers. Questions like these are likely to confuse a non-gifted person trying to guess which answers will make them look gifted:
“You are more interested in a general idea than in the details of its realization”
(Do intelligent people like ideas or details more?)
“Strict observance of the established rules is likely to prevent a good outcome”
(Either could be the smarter answer, depending on who you ask.)
“You believe the best decision is one that can be easily changed”
(It’s smart to leave your options open, but it’s also more intellectually self-confident and potentially more rewarding to take a risk based on your decision-making abilities.)
“The process of searching for a solution is more important to you than the solution itself”
(Maybe intelligence makes playing with ideas so enjoyable, gifted people see having the solution as less important.)
“When considering a situation you pay more attention to the current situation and less to a possible sequence of events”
(There are those who would consider either one of these to be the smarter answer.)
There were a lot of questions on the test whose answers you could guess are correlated with intelligence, and some of them are no-brainers, but are there enough of those no-brainers with an obvious intelligence correlation that a non-gifted person intent on looking as intelligent as possible would be able to successfully fudge their personality type?
The massive fudging didn’t create some totally unexpected personality type pattern. For instance, most people are extraverted. Would they realize the intelligence implications and fudge enough extravert questions to replicate Mensa’s introverted pattern? Would they know that choosing the judging answers over the perceiving answers would make them look like Mensans? It makes sense that the thinking vs. feeling and intuiting vs. sensing metrics would use questions of the type you’d obviously need to fudge, but why would they also choose introvert and judging answers?
The survey is anonymous and we don’t even know which people gave which IQ responses, let alone whether they’re likely to receive any sort of reward for fudging their IQ scores. Can you explain to me:
What reward would most of LessWrong want to get out of lying about their IQs?
Why, in an anonymous context where they can’t even take credit for the IQ score they provided, would most of LessWrong expect to receive any reward at all?
Can you explain to me why fudged personality type data would match my predictions? Even if they were trying to match them, how would they manage it?
“Lie” is a strawman. One could report an estimate, mis-remember, report the other “IQ” (the mental age / chronological age metric), or one may have taken any one of the entirely faulty online tests that report inflated IQs to increase the referral rate (some are bad enough to produce >100 if the answers are filled in at random).
This would be a good point if we were not discussing IQ scores generated by an IQ test selected by Yvain, which many people took at the same time as filling out the survey. This method (and timing) rules out problems with relying on estimates alone and most of the potential for mis-remembering (neither of which should be assumed to produce an average score that’s 30 points too high, since mistakes like these could go in either direction), and, assuming that the IQ test Yvain selected was pretty good, it also rules out the problem of the test being seriously skewed. If you would like to continue this line of argument, one effective way of producing doubt would be to go to the specific IQ test in question, fill out all of the answers randomly, and report the IQ that it produces. If you want to generate a full-on update regarding those particular test results, complete with Yvain being likely to refrain from recommending this source during his next survey, write a script that fills out the test randomly and reports the results (a rough sketch of the idea follows at the end of this comment), so that multiple people can run it and see for themselves what average IQ the test produces after a large number of trials. You may want to check whether Yvain or Gwern or someone else has already done this before going to the trouble.
Also, there really were people whose concern it was that people were lying on the survey. Your “lie is a strawman” perception appears to have been formed due to not having read the (admittedly massive number of) comments on this.
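For what it’s worth, here is a minimal sketch of the kind of script being suggested, in Python. The item count (39) and the number of answer choices per item (8) are assumptions about the test’s format rather than verified facts, and the sketch only estimates the raw score random answering would produce; turning that into a reported IQ still requires the site’s own scoring, so this is an illustration of the idea rather than the actual experiment:

    import random

    # Estimate the raw score that purely random answering would produce.
    # Assumptions (not verified against the actual test): 39 items,
    # 8 answer choices each, exactly one correct choice per item.
    N_ITEMS = 39
    N_CHOICES = 8
    N_TRIALS = 10_000

    def random_run() -> int:
        """Number of items answered correctly by guessing at random."""
        return sum(random.randrange(N_CHOICES) == 0 for _ in range(N_ITEMS))

    scores = [random_run() for _ in range(N_TRIALS)]
    print("mean raw score from guessing:", sum(scores) / N_TRIALS)  # about 39/8, i.e. ~4.9

Whatever IQ the test assigns to a raw score in that range is the number the argument turns on, and getting it still means running the real test (or a script against it) rather than this toy version.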
Look. People misremember (and remember the largest value, and so on) in the way most favourable to themselves. While mistakes can of course go in either direction, they don’t actually go in either direction. If you ask men to report their penis size (quite literally), they over-estimate; if you ask them to measure, they still overestimate, but not by as much. This sort of error is absolutely the norm in surveys. More so here, as the calibration (on the Bayes date-of-birth question at least) was comparatively very bad.
The situation is anything but symmetric, given that the results are rather far from the mean, on a Gaussian.
Furthermore, given the interest in self-improvement, people here are likely to have tried to improve their test scores by practice, which would have a considerably smaller effect on iqtest.dk unless you practice Raven’s matrices specifically.
The lower scores on iqtest.dk are particularly interesting in light of the fact that scores on that test are partly a result of better assignment of priors / processing of probabilities: fundamentally, one needs to pick the choice that results in the simplest (highest-probability) overall pattern. If one is overconfident that the pattern one sees is the best, one’s score suffers, so poor calibration will hurt that test more.
I intuit that this is likely to be a popular view among sceptics, but I do not recall ever being presented with research that supports this by anyone. To avoid the lure of “undiscriminating scepticism”, I am requesting to see the evidence of this.
I agree that, for numerous reasons, self-reported IQ scores, SAT scores, ACT scores and any other scores are likely to have some amount of error, and I think it’s likely for the room for error to be pretty big. On that we agree.
An average thirty points higher than normal seems to me to be quite a lot more than “pretty big”. That’s the difference between an IQ in the normal range and an IQ large enough to qualify for every definition of gifted. To use your metaphor, that’s like having a 6-incher and saying it’s 12. I can see guys unconsciously saying it’s 7 if it’s 6, or maybe even 8. But I have a hard time believing that most of these people have let their imaginations run so far away with them as to accidentally believe that they’re Mensa level gifted when they’re average. I’d bet that there was a significant amount of error, but not an average of 30 points.
If you agree with those two, then whether we agree over all just depends on what specific belief we’re each supporting.
I think these beliefs are supported:
The SAT, ACT, self-reported IQ and / or iqtest.dk scores found on the survey are not likely to be highly accurate.
Despite inaccuracies, it’s very likely that the average LessWrong member has an IQ above average—in other words, I don’t think that the scores reported on the survey are so inaccurate that I should believe that most LessWrongers actually have just an average IQ.
LessWrong is (considering a variety of pieces of evidence, not just the survey) likely to have more gifted people than you’d find by random chance.
Do we agree on those three beliefs?
If not, then please phrase the belief(s) you want to support.
Even if every self-reported IQ is exactly correct, the average of the self-reported IQ values can still be (and likely will still be) higher than the average of the readership’s IQ values.
Consider two readers, Tom and Jim. Tom does an IQ test, and gets a result of 110. Jim does an IQ test, and gets a result of 90. Tom and Jim are both given the option to fill in a survey, which asks (among other questions) what their IQ is. Neither Tom nor Jim intend to lie.
However, Jim seems significantly more likely to decide not to participate; while Tom may decide to fill in the survey as a minor sort of showing off. This effect will skew the average upwards. Perhaps not 30 points upwards… but it’s an additional source of bias, independent of any bias in individual reported values.
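The Tom-and-Jim effect is easy to see in a toy simulation. Everything below is an illustrative assumption (a N(100, 15) population and a response probability that rises linearly with measured IQ), not an estimate of LessWrong’s actual readership:

    import random
    import statistics

    # Self-selection sketch: everyone reports honestly, but the probability of
    # filling in the survey rises with measured IQ. All numbers are made up.
    random.seed(0)

    population = [random.gauss(100, 15) for _ in range(100_000)]

    def responds(iq: float) -> bool:
        # Hypothetical response model: ~10% response at IQ 85, ~50% at IQ 115.
        p = min(1.0, max(0.0, 0.1 + 0.4 * (iq - 85) / 30))
        return random.random() < p

    reported = [iq for iq in population if responds(iq)]
    print("population mean:", round(statistics.mean(population), 1))     # ~100
    print("mean of reported IQs:", round(statistics.mean(reported), 1))  # noticeably higher than 100

Every individual answer is truthful, yet the mean of the answers overshoots the population mean, which is exactly the bias being described.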
I remember looking into this when I looked at the survey data. There were only a handful of people who reported two-digit IQs, which is consistent with both the concealment hypothesis and the high average intelligence hypothesis. If you assume that nonresponders have an IQ of 100 on average, the average IQ across everyone drops down to 112. (I think this assumption is mostly useful for demonstrative purposes; I suspect that the prevalence of people with two-digit IQs on LW is lower than in the general population.)
(You could do some more complicated stuff if you had a functional form for concealment that you wanted to predict, but it’s not obvious to me that IQs on LW actually follow a normal distribution, which would make it hard to separate the oddities of concealment from the oddities of the LW population.)
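The arithmetic behind that adjustment can be made explicit. A small check using the thread’s round figures (a responder mean of roughly 138.5, nonresponders assumed to average 100), solving the weighted-mean identity for the response fraction those numbers imply:

    # Weighted-mean arithmetic behind "assume nonresponders average 100".
    # The figures are the thread's approximate round numbers, used for illustration.
    responder_mean = 138.5     # mean of self-reported IQs (approximate)
    nonresponder_mean = 100.0  # assumed
    combined_mean = 112.0      # figure quoted above

    # combined = f * responder_mean + (1 - f) * nonresponder_mean; solve for f:
    f = (combined_mean - nonresponder_mean) / (responder_mean - nonresponder_mean)
    print(f"implied fraction who reported an IQ: {f:.2f}")  # roughly 0.31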
Ah! Good point! Karma for you! Now I will think about whether there is a way to figure out the truth despite this.
Ideas?
Hmmm. Tricky.
Select a random sampling of people (such as by picking names from the phonebook). Ask each person whether they would like to fill in a survey which asks, among other things, for their IQ. If a sufficiently large, representative sample is taken, the average IQ of the sample is likely to be 100 (confirm if possible). Compare this to the average reported IQ, in order to get an idea of the size of the bias.
Select a random sampling of lesswrongers, and ask them for their IQs. If they all respond, this should cut out the self-selection bias (though the odds are that at least some of them won’t respond, putting us back at square one).
It’s probably also worth noting that this is a known problem in statistics which is not easy to compensate for.
There’s also the selection effect of only getting answers from “people who, when asked, can actually name their IQ”.
As one of the sceptics, I might as well mention a specific feature of the self-reported IQs that made me pretty sure they’re inflated. (Even before I noticed this feature, I expected the IQs to be inflated because, well, they’re self-reported. Note that I’m not saying people must be consciously lying, though I wouldn’t rule it out. Also, I agree with your three bullet points but still find an average LW IQ of 138-139 implausibly high.)
The survey has data on education level as well as IQ. Education level correlates well with IQ, so if the self-reported IQ & education data are accurate, the subsample of LWers who reported having a “high school” level of education (or less) should have a much lower average IQ. But in fact the mean IQ of the 34% of LWers with a high school education or less was 136.5, only 2.2 points less than the overall mean.
There is a pretty obvious bias in that calculation: a lot of LWers are young and haven’t had time to complete their education, however high their IQs. This stacks the deck in my favour because it means the high-school-or-less group includes a lot of people who are going to get degrees but haven’t yet, which could exaggerate the IQ of the high-school-or-less group.
I can account for this bias by looking only at the people who said they were ≥29 years old.* Among that older group, only 13% had a high school education or less...but the mean IQ of that 13% was even higher, at 139.6, almost equal to the mean IQ of 140.0 for older LWers in general. The sample sizes aren’t huge, but I think they’re too big to explain this near-equality away as statistical noise. So IQ or education level or age was systematically misreported, and the most likely candidate is IQ, ’cause almost everyone knows their age & education level, and nerds probably have more incentive to lie on a survey about their IQ than about their age or education level.
* Assuming people start university at age 18, take 3 years to get a bachelor’s, a year to get a master’s, and then up to 7 years to get a PhD, everyone who’s going to get a PhD will have one at age 29. In reality there’re a few laggards but not enough to make much difference; basically the same result comes out if I use age 30 or age 35 as a cutoff.
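For anyone who wants to rerun this kind of subgroup calculation, it is only a few lines, though the sketch below is strictly illustrative: the file name, column names, and education labels are assumptions, and the real survey export almost certainly spells them differently:

    import pandas as pd

    # Sketch of the subgroup means described above; adjust names to the real CSV.
    df = pd.read_csv("lw_survey.csv")  # hypothetical filename
    df = df.dropna(subset=["IQ", "Age", "Education"])

    older = df[df["Age"] >= 29]
    hs_or_less = older["Education"].isin(["None", "High school"])  # assumed labels

    print("older overall mean IQ:", older["IQ"].mean())
    print("older high-school-or-less mean IQ:", older.loc[hs_or_less, "IQ"].mean())
    print("share with high school or less:", hs_or_less.mean())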
And I suspect that if you look at the American population for that age cohort, you’ll find a much higher percentage than 13% with a “high school education or less”… All you’ve shown is that, of the high-school-educated populace, LW attracts the most intelligent end, the people who are dropouts for whatever reason. Which, for high-IQ people, is not that uncommon (and one reason the generic education/IQ correlation isn’t close to unity). LW filters for IQ, and so only smart high-school dropouts bother to hang out here? Hardly a daring or special-pleading sort of suggestion. And if we take at face value your reasoning that the general population-wide IQ/education correlation must hold here, it would suggest that there would be hardly any autodidacts on LW (clearly not the case), such as our leading ‘high school education or less’ member, Eliezer Yudkowsky.
Right, but even among LWers I’d still expect the dropouts to have a lower average IQ if all that’s going on here is selection by IQ. Sketch the diagram. Put down an x-axis (representing education) and a y-axis (IQ). Put a big slanted ellipse over the x-axis to represent everyone aged 29+.
Now (crudely, granted) model the selection by IQ by cutting horizontally through the ellipse somewhere above its centroid. Then split the sample that’s above the horizontal line by drawing a vertical line. That’s the boundary between the high-school-or-less group and everyone else. Forget about everyone below the horizontal line because they’re winnowed out. That leaves group A (the high-IQ people with less education) and group B (the high-IQ people with more).
Even with the filtering, group A is visibly going to have a lower average IQ than B. So even though A comprises “the most intelligent end” of the less educated group, there remains a lingering correlation between education level and IQ in the high-IQ sample; A scores less than B. The correlation won’t be as strong as the general population-wide correlation you refer to, but an attenuated correlation is still a correlation.
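The ellipse argument can also be checked numerically. A rough simulation in which every parameter (a 0.5 correlation between IQ and an “education propensity” score, a selection cut at IQ > 125, a median split on education) is chosen purely for illustration:

    import random
    import statistics

    # Truncated bivariate-normal version of the ellipse sketch above.
    random.seed(0)
    r = 0.5  # assumed IQ/education correlation
    people = []
    for _ in range(200_000):
        z_iq = random.gauss(0, 1)
        z_edu = r * z_iq + (1 - r ** 2) ** 0.5 * random.gauss(0, 1)
        people.append((100 + 15 * z_iq, z_edu))

    selected = [(iq, edu) for iq, edu in people if iq > 125]       # the horizontal cut
    edu_median = statistics.median(edu for _, edu in selected)
    group_a = [iq for iq, edu in selected if edu <= edu_median]    # less educated
    group_b = [iq for iq, edu in selected if edu > edu_median]     # more educated

    print("group A mean IQ:", round(statistics.mean(group_a), 1))
    print("group B mean IQ:", round(statistics.mean(group_b), 1))  # a few points higher

The gap between A and B shrinks relative to the untruncated population, but it does not vanish, which is the attenuated-but-nonzero correlation described above.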
It seems implausible to me that education level would screen off the same parts of the IQ distribution in LW as it does in the general population, at least at its lower levels. It’s not too unreasonable to expect LWers with PhDs to have higher IQs than the local mean, but anyone dropping out of high school or declining to enter college because they dislike intellectual pursuits, say, seems quite unlikely to appreciate what we tend to talk about here.
Upvoted. If I repeat the exercise for the PhD holders, I find they have a mean IQ of 146.5 in the older subsample, compared to 140.0 for the whole older subsample, which is consistent with what you wrote.
How significant is that difference?
I did a back-of-the-R-session guesstimate before I posted and got a two-tailed p-value of roughly 0.1, so not significant by the usual standard, but I figured that was suggestive enough.
Doing it properly, I should really compare the PhD holders’ IQ to the IQ of the non-PhD holders (so the samples are disjoint). Of the survey responses that reported an IQ score and an age of 29+, 13 were from people with PhDs (mean IQ 146.5, SD 14.8) and 135 were from people without (mean IQ 139.3, SD 14.3). Doing a t-test I get t = 1.68 with 14.2 degrees of freedom, giving p = 0.115.
It’s a third of a SD and change (assuming a 15-point SD, which is the modern standard), which isn’t too shabby; comparable, for example, with the IQ difference between managerial and professional workers. Much smaller than the difference between the general population and PhDs within it, though; that’s around 25 points.
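The comparison can be reproduced from the quoted summary statistics alone. A quick check with Welch’s t-test computed from summary stats (which may or may not match whatever the original R session did):

    from scipy.stats import ttest_ind_from_stats

    # PhD holders vs. non-holders among respondents aged 29+, using the
    # means, SDs, and sample sizes quoted above.
    result = ttest_ind_from_stats(
        mean1=146.5, std1=14.8, nobs1=13,    # PhD holders
        mean2=139.3, std2=14.3, nobs2=135,   # everyone else
        equal_var=False,                     # Welch's t-test
    )
    print(result)  # t ≈ 1.68, p ≈ 0.11, consistent with the figures above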
I was really asking about sample size, as I was too lazy to grab the raw data.
Yes, and even without particular expectation of inflation, once you see IQs that are very high, you can be quite sure IQs tend to be inflated simply because of the prior being the bell curve.
Any time I see “undiscriminating scepticism” mentioned, it’s a plea to simply ignore necessarily low priors when the evidence is too weak to change conclusions. Of course, it’s not true “undiscriminating scepticism”. If LW had undergone psychologist-administered IQ testing and those were the results, and there was then a lot of scepticism, you could claim that there’s some excessive scepticism. But as it is, rational processing of probabilities is not going to discriminate that much based on self-reported data.
Sceptics in that case, I suppose, being anyone who actually does the most basic “Bayesian” reasoning, such as starting with a Gaussian prior when you should (and understanding how an imperfect correlation between self-reported IQ and actual IQ would work on that prior, i.e. regression towards the mean when you are measuring by proxy). I picture there’s a certain level of Dunning-Kruger effect at play, whereby those least capable of probabilistic reasoning think themselves most capable (further evidenced by the calibration results; even though the question may have been to blame, I’m pretty sure most people believed that a bad question couldn’t have that much of an impact).
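For concreteness, the shrinkage that prior implies can be written down directly. A minimal sketch: the N(100, 15²) population prior is the standard scale, but the reliability assigned to a self-reported score below is an illustrative assumption, not an estimate:

    # Normal-normal shrinkage: regression toward the mean for a noisy proxy.
    prior_mean = 100.0
    reliability = 0.8    # assumed share of reported-score variance that is signal
    reported = 138.5     # e.g. a single report near the survey average

    best_guess = prior_mean + reliability * (reported - prior_mean)
    print(f"posterior best guess for the true score: {best_guess:.1f}")  # 130.8

Any reliability below 1 pulls the estimate back toward 100, and the further the report sits from 100, the larger the pullback in absolute terms.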
Wikipedia to the rescue, given that a lot of stuff is behind the paywall...
http://en.wikipedia.org/wiki/Illusory_superiority#IQ
“The disparity between actual IQ and perceived IQ has also been noted between genders by British psychologist Adrian Furnham, in whose work there was a suggestion that, on average, men are more likely to overestimate their intelligence by 5 points, while women are more likely to underestimate their IQ by a similar margin.”
and more amusingly
http://en.wikipedia.org/wiki/Human_penis_size#Erect_length
Just about any internet forum would select for people owning a computer and having an internet connection and thus cut off the poor, mentally disabled, and so on, improving the average. So when you state it this way—mere “above average”—it is a set of completely unremarkable beliefs.
It’d be interesting to check how common advanced degrees are among white Americans with an actual IQ of 138 and above, but I can’t find any info.
This was one of the things I checked when I looked into the IQ results from the survey here and here. One of the things I thought was particularly interesting was that there was a positive correlation between self-reported IQ and iqtest.dk scores (which are still self-reported, and could have been lied about, but hopefully only through deliberate lies rather than fuzzy memory effects) among posters, and a negative correlation among lurkers. This comment might also be interesting.
I endorse Epiphany’s three potential explanations, and would quantify the last one: I strongly suspect the average IQ of LWers is at least one standard deviation above the norm. I would be skeptical of the claim that it’s two standard deviations above the norm, given the data we have.
Wow, that’s quite interesting—that’s some serious Dunning-Kruger. A scatterplot could be of interest.
Thing to keep in mind is that even given a prior that errors can go either way equally, when you have obtained a result far from the mean, you must expect that errors (including systematic errors) were predominantly in that direction.
The other issue is that in 1000 people, about 1 will have an IQ of ≥146 or so, while something like 10 will have fairly severe narcissism (and this is not just your garden variety of overestimating oneself, but the level where it interferes with normal functioning).
A self-reported IQ of 146 is thus not really a good sign overall. Interestingly, some people do not understand that and go on about how others “punish” them for making poorly supported statements of exceptionality, when it is merely a matter of correct probabilistic reasoning.
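The base rate in that comparison is just a Gaussian tail, which is quick to check on the standard N(100, 15) scale:

    from statistics import NormalDist

    # Tail probability of IQ >= 146 under the standard N(100, 15) scale.
    iq = NormalDist(mu=100, sigma=15)
    p_146_plus = 1 - iq.cdf(146)
    print(f"P(IQ >= 146) = {p_146_plus:.4f}  (~{p_146_plus * 1000:.1f} per 1000)")
    # Roughly 1 per 1000, versus the ~10 per 1000 the comment above gives for
    # severe narcissism (that prevalence figure is the commenter's, not computed here).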
The actual data is even worse than what comparisons of prevalence would suggest: 25% of people put themselves in the top 1% in some circumstances.
Yes, an average of 115 would be possible.
The actual data is linked in the post near the end. If you drop three of the lurkers (who self-reported 180, 162, and 156 but scored 102, 108, and 107), then the correlation is positive (but small). (Both samples look like trapezoids, which is kind of interesting, but might be explained by people using different standard deviations.)
That sounds pretty high to me. I haven’t looked into narcissism as such, but I remember seeing similar numbers for antisocial personality disorder when I was looking into that, which surprised me; the confusion went away, however, when I noticed that I was looking at the prevalence in therapy rather than the general population.
Something similar, perhaps?
You know, people do lie to themselves. It’s a sad but true (and well known around here) fact about human psychology that humans have surprisingly bad models of themselves. It is simply true that if you asked a bunch of people selected at random about their (self-reported) IQ scores, you would get an average of more than 100. One would hope that LessWrongers are good enough at detecting bias to mostly dodge that bullet, but the evidence of whether or not we actually are that good at it is scarce at best.
Your unintentional-lie explanation does not explain how the SAT scores ended up so closely synchronised with the IQ scores—as we know, one common sign of a lie is that the details do not add up. Synchronising one’s SAT scores to the same level as one’s IQ scores would most likely require conscious effort, making the discrepancy obvious to the LessWrong members who took the survey. If you would argue that they were likely to have chosen corresponding SAT scores in some way that did not require them to become consciously aware of discrepancies, how would you support the argument that they synched them by accident? If not, would you support the argument that LessWrong members consciously lied about it?
Linda Silverman, a giftedness researcher, has observed that parents are actually pretty decent at assessing their child’s intellectual abilities despite the obvious cause for bias.
“In this study, 84% of the children whose parents indicated that they fit three-fourths of the characteristics tested above 120 IQ.” (An unpublished study, unfortunately.)
http://www.gifteddevelopment.com/PDF_files/scalersrch.pdf
This isn’t exactly the same as managing knowledge of one’s own intellectual abilities, but if it seems to you that parents would most likely be hideously biased when assessing their children’s intellectual abilities, even though, according to a giftedness researcher, this is probably not the case, then perhaps you should also consider that your concern that most LessWrong members subconsciously inflate their own IQ scores by a whopping 30 points (if that is your perception) may be far less of a problem than you thought.