Aspergers Poll Results: LW is nerdier than the Math Olympiad?
Followup to: Do you have High Functioning Aspergers Syndrome?
EDIT: To combat nonresponse bias, I’d appreciate it if anyone who considered the poll and decided not to fill it in would go and do so now, but that people who haven’t already seen the poll refrain from doing so. We might get some idea of which way the bias points by looking at the difference in results.
This is your opportunity to help your community’s social epistemology!
Since over 80 LW’ers were kind enough to fill out my survey on Aspergers, I thought I’d post the results.
4 people said they had already been diagnosed with Aspergers Syndrome, out of 82 responses. That’s 5%, where the population incidence rate is thought to be 0.36%. However, the incidence rate is known to be larger than the diagnosis rate, as many AS cases (I don’t know how many) go undiagnosed. An additional 4 people ticked the five diagnostic criteria I listed; if we count each of them as 1⁄2 a case, that’s 6 equivalent cases out of 82, or about 7.3%, roughly 20 times the baseline AS rate.
The mean Less Wrong AQ test score was 27, and only 5 people scored at or below 16, which is the population average on this test. 21 people, or 26%, scored 32 or more on the AQ test, though this is only an indicator and does not mean that 26% of LW have Aspergers.
To put the AQ test results in perspective, this paragraph from Wikipedia outlines what various groups got on average:
The questionnaire was trialled on Cambridge University students, and a group of sixteen winners of the British Mathematical Olympiad, to determine whether there was a link between a talent for mathematical and scientific disciplines and traits associated with the autism spectrum. Mathematics, physical sciences and engineering students were found to score significantly higher, e.g. 21.8 on average for mathematicians and 21.4 for computer scientists. The average score for the British Mathematical Olympiad winners was 24. Of the students who scored 32 or more on the test, eleven agreed to be interviewed and seven of these were reported to meet the DSM-IV criteria for Asperger syndrome, although no formal diagnosis was made as they were not suffering any distress. The test was also taken by a group of subjects who had been diagnosed with autism or Asperger syndrome by a professional, the average score being 35 and 38 for males and females respectively.
If we take 7⁄11 times the 26% of LW who scored 32+, we get 16%, which is somewhat higher than the 7-10% you might estimate from the number of people who said they have diagnoses. Note, though, that the 7 trial students who were found to meet the diagnostic criteria were not diagnosed, as their condition was not causing them “distress”, indicating that for high-functioning AS adults, the incidence rate might be a lot higher than the diagnosis rate.
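As a sanity check, the arithmetic above can be sketched in a few lines. This is a minimal back-of-envelope script using only the counts reported in this post and the 7⁄11 conversion factor from the Cambridge trial quoted above:

```python
# Counts from the poll results described in this post.
n_responses = 82
n_diagnosed = 4            # already diagnosed with AS
n_criteria_only = 4        # ticked all five criteria, counted at half weight
baseline_rate = 0.0036     # ~0.36% assumed population incidence

survey_rate = (n_diagnosed + 0.5 * n_criteria_only) / n_responses
print(f"survey AS rate: {survey_rate:.1%}")                      # ~7.3%
print(f"ratio to baseline: {survey_rate / baseline_rate:.0f}x")  # ~20x

# Alternative estimate: 21 of 82 scored 32+, and 7 of the 11 interviewed
# 32+ scorers in the Cambridge trial met the DSM-IV criteria.
high_scorer_fraction = 21 / 82
print(f"7/11-based estimate: {high_scorer_fraction * 7 / 11:.0%}")  # ~16%
```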
What does this mean?
Well, for one thing it means that Less Wrong is “on the spectrum”, even if we’re mostly not falling off the right tail. Only about 1 in 10 people on Less Wrong are “normal” in terms of the empathizing/systematizing scale, perhaps 1 in 10 are far enough out to be full blown Aspergers, and the rest of us sit somewhere in between, with most people being more to the right of the distribution than the average Cambridge mathematics student.
Interestingly, 48% of respondents ticked this criterion:
Severe impairment in reciprocal social interaction (at least two of the following) (a) inability to interact with peers, (b) lack of desire to interact with peers, (c) lack of appreciation of social cues, (d) socially and emotionally inappropriate behavior
Which indicates that we’re mostly not very good at the human social game.
EDIT: Note also that nonresponse bias means that these conclusions only apply strictly to that subset of LW who actually responded, i.e. those specific 82 people. Since “Less Wrong” is a vague collection of “concentric levels” of involvement, from occasional reader to hardcore poster, and those who are more heavily involved are more likely to have responded (e.g. because they read more of the posts, and have more time), the results probably apply more to those who are more involved.
Response bias could be counteracted by doing more work (e.g. asking only specific commenters, randomly selected, to respond), or by simply having a prior for response bias and AQ rates, and using the survey results to update it.
The bigger response bias problem comes from the fact that the survey was presented as a survey about Asperger’s, which means that people who are on the autism spectrum probably tended to be more interested in it and more likely to respond to the survey. For instance, a survey about sexual orientation at Cognitive Daily found that rates of homosexuality increased from about 5% of respondents when the sexuality question was tacked on to an unrelated survey to about 15% when the poll was advertised as being about sexual orientation.
Perhaps LW should develop a convention for having surveys where the purpose of the survey isn’t explained until after people have taken it.
OK, so if we take the homosexuality survey as a data point, we can say that selection bias in a survey can increase the reported rate of a behavior by a factor of 3 (5% homosexual in the normal population, 15% when the survey is about sexual orientation).
If we assume the same increased rate applies in this case, we can divide 5% by three and get about 1.67%, still far above the normal 0.36% of the general population.
The ratio could easily be higher than 3:1. The Cognitive Daily survey had a one-question poll right there in the top post on the site’s home page, while this survey required following a link at the bottom of a long post about Asperger’s. That means that this sample was probably more filtered for those who are most motivated and interested in the topic, which will tend to make the sample more biased.
Somewhere around 1% of the people who visited Less Wrong took the poll (82 respondents when there are about 8,000 visitors/day), and it’s not out of the realm of possibility for the response rate among people with Asperger’s to have been 14%, which would be enough to account for the 14:1 ratio between the 5% rate of Asperger’s in the sample and the 0.36% rate in the general population.
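The selection-effect arithmetic in this thread can be sketched directly. All numbers below come from the comments above; the 3× factor is the increase observed in the Cognitive Daily sexuality poll, applied here only as a rough analogy:

```python
# Numbers taken from the surrounding discussion.
sample_rate = 0.05        # ~5% of respondents reported an AS diagnosis
population_rate = 0.0036  # ~0.36% general-population incidence
selection_factor = 3      # topic-advertised vs. tacked-on question (analogy)

adjusted_rate = sample_rate / selection_factor
print(f"adjusted rate: {adjusted_rate:.2%}")   # ~1.67%, still well above baseline

# How strong would self-selection need to be to fully explain the gap?
overall_response_rate = 82 / 8000               # ~1% of daily visitors responded
required_ratio = sample_rate / population_rate  # ~14:1
print(f"required AS response rate: {overall_response_rate * required_ratio:.0%}")
```

The final figure (~14%) matches the comment above: a response rate that high among people with Asperger’s would by itself account for the entire gap.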
It’s hard to estimate the size of a selection effect, especially when the response rate is so low, which makes it hard to undo the bias after the fact to figure out the actual rate. That’s why I’d recommend putting some convention in place for surveys that avoid selection effects as much as possible, if we’re going to try to use surveys to draw conclusions about the LW community.
Also, wanting to take a multiple choice test that outputs a numeric score is going to have a higher correlation with autism/Asperger’s than with homosexuality :)
Just out of interest, what’s your prediction of the sample mean AQ for those responses to my survey that are submitted after I put the note at the top of the post asking only previous non-responders to respond, assuming I get more than, say, 5?
It’s hard to say, since the new set of respondents could have the same selection bias. They would be people who 1) opened your “Do you have High-Functioning Asperger’s Syndrome?” post, 2) read the last paragraph of the post which mentioned the poll, 3) considered taking the poll but didn’t, 4) opened this post about the Asperger’s poll (or re-opened your previous post) and saw your request, and 5) decided to comply with your request and take the poll. Steps 1, 2, and 4 all seem likely to select for high-AQ people; steps 3 & 5 might shift your sample in the opposite direction but probably not as strongly (although the explicit concerns about nonresponse bias could make low-AQ people especially motivated to take the survey to counteract that bias).
To answer your question, I’ll guess that the new mean will be slightly lower than the first result but not by much. Let’s say a mean of 25 for your new sample, and a mean of 21 if we ever give this survey to a representative sample of the LW community (or at least a sample that doesn’t know that the survey is about Asperger’s).
The results of the new survey will, I am sure, be enlightening!
It’s a convenience sample!
This is grossly overstated, because the survey respondents are in no way known to be representative of LessWrong. In fact, I would suspect a very strong selection bias. Before you can go saying anything about the LessWrong community, you need some response statistics. How many people were offered the survey? (You could get this from the server logs. Your survey wasn’t promoted and was only up for a day or so, an exposure that already subsets its audience from “LessWrong”) How many of the people who saw the survey decided to fill it out? How do the people who filled out the survey differ from the people who did not fill out the survey? I doubt the answers to any of these questions would be satisfying unless you started with a random sample, then individually and privately invited each of the sampled people to participate. Even then, the response bias might be bad enough to preclude any conclusions about LessWrong.
This is a good point, and we should consider it if we design a poll feature for LW, for example the poll could be made available only to registered users, and could track which of them saw it, and perhaps even give Karma rewards to those users who complete it to ensure a higher response rate.
Just out of interest, what’s your prediction of the sample mean AQ for those responses to my survey that are submitted after I put the note at the top of the post asking only previous non-responders to respond, assuming I get more than, say, 5?
Based mostly on Unnamed’s comment regarding a poll on homosexuality, I expect it would be lower than what you found for self-selected interested respondents, but I don’t have much confidence in this, and I’m not familiar enough with AQ variance to say how much lower.
I have some experience with trying to do survey research right (i.e. in a manner that results in true conclusions about the population), and I have seen so many ways that convenience samples can make me wrong (non-response bias is just a start), that I generally don’t bother much with trying to correct for one particular bias. Instead, I stick to random samples or use estimators that can handle convenience samples. But I am genuinely intrigued by your idea (from the very end of your post) to use a prior on non-response bias and a prior on AQ distribution to estimate the true AQ distribution of LW users. This wouldn’t handle e.g. problems with question wording or lies in responses, but I would expect those problems to be small in comparison to the non-response bias. Could you elaborate, or point me to any survey methods papers that explain this approach in detail? Where would you get the priors?
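One hypothetical shape such an estimator could take is a simple grid posterior over the true mean AQ and the size of the nonresponse bias. Everything here is illustrative rather than anything proposed in the thread: the two priors, the assumed AQ standard deviation of 8, and the grid resolution are all assumptions of mine.

```python
import numpy as np

# Grid over the true mean AQ of the full LW population.
true_means = np.linspace(16, 32, 161)
# Assumed prior: centered between the population mean (16) and the observed 27.
prior_mean = np.exp(-0.5 * ((true_means - 22) / 4) ** 2)

# Grid and assumed prior over the nonresponse bias (how many points higher
# responders score than the population they came from); small bias favored.
biases = np.linspace(0, 10, 101)
prior_bias = np.exp(-biases / 3)

# Likelihood of the observed sample mean (27, n=82) given true mean + bias,
# assuming an AQ standard deviation of ~8 (a guess, not from the survey).
observed, n, sd = 27.0, 82, 8.0
se = sd / np.sqrt(n)
M, B = np.meshgrid(true_means, biases, indexing="ij")
lik = np.exp(-0.5 * ((observed - (M + B)) / se) ** 2)

# Posterior over the joint grid, then the marginal mean for the true AQ.
post = lik * prior_mean[:, None] * prior_bias[None, :]
post /= post.sum()
posterior_mean = (true_means[:, None] * post).sum()
print(f"posterior estimate of true LW mean AQ: {posterior_mean:.1f}")
```

This wouldn’t address question wording or dishonest responses either, but it makes the dependence on the bias prior explicit: widen `prior_bias` and the posterior for the true mean drops accordingly.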
Yes, this is a good point.
Note that it’s still evidence, in the sense that changing the results of the survey ought to change your opinions about the AS-spectrum average over the cluster of Less Wrong readers.
Of course there is a potentially huge response bias: according to Sitemeter there were 3,000 visitors today, so nonresponders could outnumber responders by up to roughly 60 to 1 (which means that with maximum response bias, i.e. all nonresponders scoring an average of 16 on the AQ test and having no diagnosis, LW could have a normal AS rate).
Various factors will affect who responds, but until you have a hypothesis about which way that bias points, all that does is add noise. I would predict that random sampling of people who ever look at the LW homepage would yield lower AS-spectrum results, but that random sampling of people who read regularly and/or comment probably wouldn’t.
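The worst-case bound gestured at above can be made concrete: suppose every single AS case on the site responded and every nonresponder is undiagnosed. The visitor and respondent counts come from the surrounding comments.

```python
# Numbers from the surrounding comments.
visitors, respondents = 3000, 82
diagnosed = 4

# Worst case: all AS cases among today's visitors are in the sample.
worst_case_rate = diagnosed / visitors
print(f"worst-case LW AS rate: {worst_case_rate:.2%}")  # ~0.13%

# That is below the 0.36% population baseline, so maximal nonresponse
# bias could indeed push LW down to (or below) a normal AS rate.
print(worst_case_rate < 0.0036)  # True
```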
Wouldn’t the maximum response bias be if all the non-responders scored 0 on the scale?
This is highly implausible since we didn’t get a single response below 10, but yes, technically that would be an even bigger bias.
I scored a 12, but was not inspired to actually take that test until reading this post, and I haven’t bothered posting a response to the original post. The people most likely to take that test seem to be people who already suspect they may have AS to some extent; people who know that they very clearly don’t are unlikely to be interested enough to take it. So there is excellent reason to suspect that your sample is highly non-representative and biased in favor of overreporting.
Just out of interest, what’s your prediction of the sample mean AQ for those responses to my survey that are submitted after I put the note at the top of the post asking only previous non-responders to respond, assuming I get more than, say, 5?
With that small a sample, I’m not sure. Merely adding the criterion “respond if you haven’t responded yet” seems unlikely to draw particularly different people. It will just draw more people with the same characteristics who didn’t already vote, if it draws anyone.
One thing I noticed on the AQ test is a number of questions which don’t clearly relate to any of the direct symptoms, but are chosen apparently because these things correlate with Asperger’s/Autism.
For instance, the statement “I find numbers fascinating”. This may well correlate with Autism, but I bet it correlates even more strongly with an interest in math or math-heavy sciences. That math olympians nearly all answered yes to this question, doesn’t really tell us anything new about the correlation between math and AQ. The fact that this question is scored positively for AQ has built the correlation assumption into the test.
There are some less obvious examples such as “I see patterns all the time”. I would wild-ass guess that at least 2-3 points of the gap between the math heavy groups’ scores and the population average scores are represented by answers to questions that are not related directly to symptoms, but are personality/brain traits correlated with symptoms, which also happen to be highly useful for doing math.
True, but the fact remains that 7⁄11 = 64% of the students who scored 32+ were diagnosed with AS by psychiatrists; presumably that vindicates the assumptions, to the extent that you trust the DSM-IV diagnosis as a benchmark.
What the DSM says, how psychiatrists diagnose in the wild, and how psychiatrists diagnose in studies are three wildly different things. Practically unrelated.
Leave a line of retreat.
Hullo?
Edit: I don’t see the relevance of “Leave a line of retreat” to Douglas_Knight’s comment—I would like an explanation.
Sorry, inferential distance.
It seems to privilege the hypothesis to use the factoid of non-standardized DSM use to dismiss a relevant point based on the best available evidence. Does Douglas_Knight have reason to believe that such possible caveats with DSM use render the point moot? I consider it non-obvious that such a factoid completely abolishes Roko’s argument.
It seems flawed to counter a specific finding with a fairly large effect with a general critique of the technique without evidence that this particular example is likely to be biased by it.
IOW, what would Douglas_Knight’s response be if his factoid is either wrong, non-applicable, or irrelevant?
It’s a conversation-stopper when used like here.
I am hoping to learn the population standard deviation for the AS quotient. No luck so far, but this paper has the following info: 100 people suspected of having Asperger’s took the test, and then were evaluated by two clinicians using the DSM-IV criteria. The 73 subsequently diagnosed with Asperger’s had a mean of 36.62 and std. dev. of 6.63. The 27 not diagnosed had a mean of 26.22 and std. dev. of 9.39. Needless to say, one should interpret these numbers carefully because of the selection into the study, and due to variation across clinicians in coming to the same diagnosis.
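Using the means and standard deviations reported in that paper (n=73 diagnosed, n=27 not diagnosed), one can at least compute how well the two groups separate. Summarizing the gap with Cohen's d and a pooled SD is my own choice here, not necessarily what the paper reports:

```python
import math

# Group statistics as reported in the cited paper.
n1, m1, s1 = 73, 36.62, 6.63   # diagnosed with Asperger's
n2, m2, s2 = 27, 26.22, 9.39   # evaluated but not diagnosed

# Pooled standard deviation and Cohen's d for the group difference.
pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = (m1 - m2) / pooled_sd
print(f"pooled SD: {pooled_sd:.2f}, Cohen's d: {d:.2f}")
```

A d around 1.4 is a large separation, though as noted above, both the selection into the study and clinician variability make it hazardous to read the within-study SDs as population values.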
Please insert a section break near the start of this post, so the whole thing doesn’t show up on “NEW”.
Any idea what share of the general population would report “Severe impairment in reciprocal social interaction”?
I guess it would be better to reword that question for the general population. Too many people would switch off when they saw the word “reciprocal.”
Anecdote showing how such questions can be highly misleading:
I know someone who has bipolar disorder. He said that online questionnaires or simple diagnostic statements like the one quoted above are often misinterpreted, that it’s the severity that matters most, and many people are not able to judge that properly.
He explained that many people who answer a bipolar diagnostic question like “Do you often feel happy and then sad a short time later?” might think to themselves, “Hmm, just yesterday I was fine and then I got kind of down...” and so would answer yes to the binary question. For someone with true bipolar disorder, however, his example answer was: “Last week I worked 20 hours every day and founded 2 nonprofit organizations. This week I haven’t been outside at all and I tried to kill myself. Again.”
This sort of analysis should be done by someone with experience; the gap between ‘normal’ and ‘severe’ may be hard to fathom.
Another factor is that people who are willing to answer online questionnaires are almost certainly not randomly distributed.
Speaking of problems with multiple choice tests:
I suspect that IQ tests have some non-obvious biases: they select for mild ADD—a series of little puzzles which take so little time that (if you like that sort of puzzle), you don’t have time to get distracted.
They don’t address the ability to take on large projects or to find new questions.
They select against people who feel a need to only think about things which are directly connected to their goals.
I don’t know; though I would suspect that they would over-report, and that people with very high AQ scores would under-report. Self-reports are a crude measuring instrument.
Not sure if this has been mentioned before, but I think that this Aspie Quiz is more thorough than the one in Wired.
IMO these kinds of quizzes do have their drawbacks as many aspie traits can be compensated for with time and practice. Female aspie traits (or the special nuances in them) may also be overlooked.
I got a 12 on the AQ test, although I’m not that happy with all my responses. Feel free to ask me questions:
http://www.formspring.me/neurotypical
Doing this with a shill account so no one will know how egotistical I am. Don’t expect responses to messages directed at this LW account as I probably won’t log in to it again. For reference, my real Less Wrong account’s karma is in the 500-1000 range.
Sorry for not responding to questions earlier; my settings say I receive an email notification every time a question was asked… That didn’t really happen. I guess I’m not answering any more questions.
If anyone has a good prior for the average level of nonresponse bias in surveys of the form “Might you have X?”, post as a subcomment of this comment.
I originally looked at the poll but didn’t answer until now.
I fit two of the Gillberg diagnostic criteria and scored 19 on the wired test. I would’ve definitely scored much higher a few years back when I suspected I might have asperger’s. My social skills have developed a lot since then and I’m now more inclined to attribute my social deficiencies to lack of practice than anything else.
For what it’s worth, I do seem to follow a pattern of intense pursuit of relatively few interests that change over time.
Huh. It is interesting that you scored only 19 (almost 8 lower than me and pretty close to the mean) and still checked two of the Gillberg criteria (they seemed much too strongly put to describe me).
It becomes a bit less surprising when you consider that I attribute my low score mostly to relatively recent changes in my social skills and preferences, and the criteria I checked were the ones about all-absorbing narrow interests and imposition of routines and interests. As a matter of fact, the changes I mention came about as a result of months of near-obsessive study and accidental practice. (I did not grasp the importance of practice at the time but that’s a subject for another comment.)
Maybe I wasn’t that far toward the autism end of the spectrum to begin with but it does make me wonder just how much others could improve their social skills given the right circumstances.
I’d love to hear more about this.
In short, I used to believe that social skills are a talent you’re born with, not a skill to be developed. Luckily just being around people and paying attention improved my eye for social cues enough that I eventually noticed.
This relates to Carol S. Dweck’s book Mindset, which I’ve mentioned before. I’m thinking of writing more about it sometime soon.
Commenting in response to the edit…
I took the Wired quiz earlier but didn’t actually fill in the poll at the time. Sorry about that. I’ve done so now.
Remarks: I scored a 27 on the quiz, but couldn’t honestly check any of the four diagnostic criteria. I lack many distinctive autism-spectrum characteristics (possibly to the extent of being on the other side of baseline), but have a distinctly introverted/antisocial disposition.
Saw the survey earlier and skipped it, just took it and scored 25 on the Wired test, which I see is below the convenience sample’s 27 average.
(Also, 82 responses in the original batch?! I didn’t think there were so many regular posters here.)
There’s not a little traffic to this blog.
And even these measurements are undercounts, because some people worried about privacy block traffic trackers like SiteMeter.