2012 Survey Results
Thank you to everyone who took the 2012 Less Wrong Survey (the survey is now closed. Do not try to take it.) Below the cut, this post contains the basic survey results, a few more complicated analyses, and the data available for download so you can explore it further on your own. You may want to compare these to the results of the 2011 Less Wrong Survey.
Part 1: Population
How many of us are there?
The short answer is that I don’t know.
The 2011 survey ran 33 days and collected 1090 responses. This year’s survey ran 23 days and collected 1195 responses. The average number of new responses during the last week was about five per day, so even if I had kept this survey open as long as the last one I probably wouldn’t have gotten more than about 1250 responses. That means at most a 15% year-on-year growth rate, which is pretty abysmal compared to the 650% growth over two years we saw last time.
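The growth arithmetic above is simple enough to sketch directly (all numbers from the paragraph itself):

```python
# Growth-rate arithmetic: even a generous projection of ~1250 responses
# (1195 actual plus ~5/day for the extra ten days) over last year's 1090
# caps year-on-year growth at about 15%.
last_year = 1090
projected_this_year = 1250
growth = (projected_this_year - last_year) / last_year
print(f"{growth:.1%}")  # 14.7%
```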
About half of these responses were from lurkers; over half of the non-lurker remainder had commented but never posted to Main or Discussion. That means there were only about 600 non-lurkers.
But I am skeptical of these numbers. I hang out with some people who are very closely associated with the greater Less Wrong community, and a lot of them didn’t know about the survey until I mentioned it to them in person. I know some people who could plausibly be described as focusing their lives around the community who just never took the survey for one reason or another. One lesson of this survey may be that the community is no longer limited to people who check Less Wrong very often, if at all. One friend didn’t see the survey because she hangs out on the #lesswrong channel more than the main site. Another mostly just goes to meetups. So I think this represents only a small sample of people who could justly be considered Less Wrongers.
The question of “how quickly is LW growing” is also complicated by the high turnover. Over half the people who took this survey said they hadn’t participated in the survey last year. I tried to break this down by combining a few sources of information, and I think our 1200 respondents include 500 people who took last year’s survey, 400 people who were around last year but didn’t take the survey for some reason, and 300 new people.
As expected, there’s lower turnover among regulars than among lurkers. Of people who have posted in Main, about 75% took the survey last year; of people who only lurked, about 75% hadn’t.
This view of a very high-turnover community and lots of people not taking the survey is consistent with Vladimir Nesov’s data (http://lesswrong.com/lw/e4j/number_of_members_on_lesswrong/77xz) showing 1390 people who have written at least ten comments. But the survey includes only about 600 people who have at least commented; 800ish of Vladimir’s accounts are either gone or didn’t take the census.
Part 2: Categorical Data
SEX:
Man: 1057, 89.2%
Woman: 120, 10.1%
Other: 2, 0.2%
No answer: 6, 0.5%
GENDER:
M (cis): 1021, 86.2%
F (cis): 105, 8.9%
M (trans f->m): 3, 0.3%
F (trans m->f): 16, 1.3%
Other: 29, 2.4%
No answer: 11, 0.9%
ORIENTATION:
Heterosexual: 964, 80.7%
Bisexual: 135, 11.4%
Homosexual: 28, 2.4%
Asexual: 24, 2%
Other: 28, 2.4%
No answer: 14, 1.2%
RELATIONSHIP STYLE:
Prefer monogamous: 639, 53.9%
Prefer polyamorous: 155, 13.1%
Uncertain/no preference: 358, 30.2%
Other: 21, 1.8%
No answer: 12, 1%
NUMBER OF CURRENT PARTNERS:
0: 591, 49.8%
1: 519, 43.8%
2: 34, 2.9%
3: 12, 1%
4: 5, 0.4%
6: 1, 0.1%
7: 1, 0.1% (and this person added “really, not trolling”)
Confusing or no answer: 20, 1.8%
RELATIONSHIP STATUS:
Single: 628, 53%
Relationship: 323, 27.3%
Married: 220, 18.6%
No answer: 14, 1.2%
RELATIONSHIP GOALS:
Not looking for more partners: 707, 59.7%
Looking for more partners: 458, 38.6%
No answer: 20, 1.7%
COUNTRY:
USA: 651, 54.9%
UK: 103, 8.7%
Canada: 74, 6.2%
Australia: 59, 5%
Germany: 54, 4.6%
Israel: 15, 1.3%
Finland: 15, 1.3%
Russia: 13, 1.1%
Poland: 12, 1%
These are all the countries with greater than 1% of Less Wrongers, but other, more exotic locales included Kenya, Pakistan, and Iceland, with one user each. You can see the full table here.
This data also allows us to calculate Less Wrongers per capita:
Finland: 1⁄366,666
Australia: 1⁄389,830
Canada: 1⁄472,972
USA: 1⁄483,870
Israel: 1⁄533,333
UK: 1⁄603,883
Germany: 1⁄1,518,518
Poland: 1⁄3,166,666
Russia: 1⁄11,538,462
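The per-capita figures are just each country’s population divided by its respondent count. A minimal sketch, using rough 2012 population estimates (my assumptions for illustration, not survey data):

```python
# One LessWronger per N residents: population / respondents.
# Population figures are rough 2012 estimates, assumed for illustration.
populations = {"Finland": 5_500_000, "Australia": 23_000_000, "USA": 315_000_000}
respondents = {"Finland": 15, "Australia": 59, "USA": 651}

for country, n in respondents.items():
    print(f"{country}: 1/{populations[country] / n:,.0f}")
```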
RACE:
White, non-Hispanic: 1003, 84.6%
East Asian: 50, 4.2%
Hispanic: 47, 4.0%
Indian Subcontinental: 28, 2.4%
Black: 8, 0.7%
Middle Eastern: 4, 0.3%
Other: 33, 2.8%
No answer: 12, 1%
WORK STATUS:
Student: 476, 40.7%
For-profit work: 364, 30.7%
Self-employed: 95, 8%
Unemployed: 81, 6.8%
Academics (teaching): 54, 4.6%
Government: 46, 3.9%
Non-profit: 44, 3.7%
Independently wealthy: 12, 1%
No answer: 13, 1.1%
PROFESSION:
Computers (practical): 344, 29%
Math: 109, 9.2%
Engineering: 98, 8.3%
Computers (academic): 72, 6.1%
Physics: 66, 5.6%
Finance/Econ: 65, 5.5%
Computers (AI): 39, 3.3%
Philosophy: 36, 3%
Psychology: 25, 2.1%
Business: 23, 1.9%
Art: 22, 1.9%
Law: 21, 1.8%
Neuroscience: 19, 1.6%
Medicine: 15, 1.3%
Other social science: 24, 2%
Other hard science: 20, 1.7%
Other: 123, 10.4%
No answer: 27, 2.3%
DEGREE:
Bachelor’s: 438, 37%
High school: 333, 28.1%
Master’s: 192, 16.2%
Ph.D: 71, 6%
2-year: 43, 3.6%
MD/JD/professional: 24, 2%
None: 55, 4.6%
Other: 15, 1.3%
No answer: 14, 1.2%
POLITICS:
Liberal: 427, 36%
Libertarian: 359, 30.3%
Socialist: 326, 27.5%
Conservative: 35, 3%
Communist: 8, 0.7%
No answer: 30, 2.5%
You can see the exact definitions given for each of these terms on the survey.
RELIGIOUS VIEWS:
Atheist, not spiritual: 880, 74.3%
Atheist, spiritual: 107, 9.0%
Agnostic: 94, 7.9%
Committed theist: 37, 3.1%
Lukewarm theist: 27, 2.3%
Deist/Pantheist/etc: 23, 1.9%
No answer: 17, 1.4%
FAMILY RELIGIOUS VIEWS:
Lukewarm theist: 392, 33.1%
Committed theist: 307, 25.9%
Atheist, not spiritual: 161, 13.6%
Agnostic: 149, 12.6%
Atheist, spiritual: 46, 3.9%
Deist/Pantheist/Etc: 32, 2.7%
Other: 84, 7.1%
RELIGIOUS BACKGROUND:
Other Christian: 517, 43.6%
Catholic: 295, 24.9%
Jewish: 100, 8.4%
Hindu: 21, 1.8%
Traditional Chinese: 17, 1.4%
Mormon: 15, 1.3%
Muslim: 12, 1%
Raw data is available here.
MORAL VIEWS:
Consequentialism: 735, 62%
Virtue Ethics: 166, 14%
Deontology: 50, 4.2%
Other: 214, 18.1%
No answer: 20, 1.7%
NUMBER OF CHILDREN:
0: 1044, 88.1%
1: 51, 4.3%
2: 48, 4.1%
3: 19, 1.6%
4: 3, 0.3%
5: 2, 0.2%
6: 1, 0.1%
No answer: 17, 1.4%
WANT MORE CHILDREN?
No: 438, 37%
Maybe: 363, 30.7%
Yes: 366, 30.9%
No answer: 16, 1.4%
LESS WRONG USE:
Lurkers (no account): 407, 34.4%
Lurkers (with account): 138, 11.7%
Posters (comments only): 356, 30.1%
Posters (comments + Discussion only): 164, 13.9%
Posters (including Main): 102, 8.6%
SEQUENCES:
Never knew they existed until this moment: 99, 8.4%
Knew they existed; never looked at them: 23, 1.9%
Read < 25%: 227, 19.2%
Read ~ 25%: 145, 12.3%
Read ~ 50%: 164, 13.9%
Read ~ 75%: 203, 17.2%
Read ~ all: 306, 24.9%
No answer: 16, 1.4%
Dear 8.4% of people: there is this collection of old blog posts called the Sequences. It is by Eliezer, the same guy who wrote Harry Potter and the Methods of Rationality. It is really good! If you read it, you will understand what we’re talking about much better!
REFERRALS:
Been here since Overcoming Bias: 265, 22.4%
Referred by a link on another blog: 23.5%
Referred by a friend: 147, 12.4%
Referred by HPMOR: 262, 22.1%
No answer: 35, 3%
BLOG REFERRALS:
Common Sense Atheism: 20 people
Hacker News: 20 people
Reddit: 15 people
Unequally Yoked: 7 people
TV Tropes: 7 people
Marginal Revolution: 6 people
gwern.net: 5 people
RationalWiki: 4 people
Shtetl-Optimized: 4 people
XKCD fora: 3 people
Accelerating Future: 3 people
These are all the sites that referred at least three people in a way that was obvious to disentangle from the raw data. You can see a more complete list, including the long tail, here.
MEETUPS:
Never been to one: 834, 70.5%
Have been to one: 320, 27%
No answer: 29, 2.5%
CATASTROPHE:
Pandemic (bioengineered): 272, 23%
Environmental collapse: 171, 14.5%
Unfriendly AI: 160, 13.5%
Nuclear war: 155, 13.1%
Economic/Political collapse: 137, 11.6%
Pandemic (natural): 99, 8.4%
Nanotech: 49, 4.1%
Asteroid: 43, 3.6%
The wording of this question was “which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?”
CRYONICS STATUS:
No, don’t want to: 275, 23.2%
No, still thinking: 472, 39.9%
No, procrastinating: 178, 15%
No, unavailable: 120, 10.1%
Yes, signed up: 44, 3.7%
Never thought about it: 46, 3.9%
No answer: 48, 4.1%
VEGETARIAN:
No: 906, 76.6%
Yes: 147, 12.4%
No answer: 130, 11%
For comparison, 3.2% of US adults are vegetarian.
SPACED REPETITION SYSTEMS:
Don’t use them: 511, 43.2%
Do use them: 235, 19.9%
Never heard of them: 302, 25.5%
Dear 25.5% of people: spaced repetition systems are nifty, mostly free computer programs that allow you to study and memorize facts more efficiently. See for example http://ankisrs.net/
HPMOR:
Never read it: 219, 18.5%
Started, haven’t finished: 190, 16.1%
Read all of it so far: 659, 55.7%
Dear 18.5% of people: Harry Potter and the Methods of Rationality is a Harry Potter fanfic about rational thinking written by Eliezer Yudkowsky (the guy who started this site). It’s really good. You can find it at http://www.hpmor.com/.
ALTERNATIVE POLITICS QUESTION:
Progressive: 429, 36.3%
Libertarian: 278, 23.5%
Reactionary: 30, 2.5%
Conservative: 24, 2%
Communist: 22, 1.9%
Other: 156, 13.2%
ALTERNATIVE ALTERNATIVE POLITICS QUESTION:
Left-Libertarian: 102, 8.6%
Progressive: 98, 8.3%
Libertarian: 91, 7.7%
Pragmatist: 85, 7.2%
Social Democrat: 80, 6.8%
Socialist: 66, 5.6%
Anarchist: 50, 4.1%
Futarchist: 29, 2.5%
Moderate: 18, 1.5%
Moldbuggian: 19, 1.6%
Objectivist: 11, 0.9%
These are the only ones that had more than ten people. Other responses notable for their unusualness were Monarchist (5 people), fascist (3 people, plus one who was up for fascism but only if he could be the leader), conservative (9 people), and a bunch of people telling me politics was stupid and I should feel bad for asking the question. You can see the full table here.
CAFFEINE:
Never: 162, 13.7%
Rarely: 237, 20%
At least 1x/week: 207, 17.5%
Daily: 448, 37.9%
No answer: 129, 10.9%
SMOKING:
Never: 896, 75.7%
Used to: 105, 8.9%
Still do: 51, 4.3%
No answer: 131, 11.1%
For comparison, about 28.4% of the US adult population smokes.
NICOTINE (OTHER THAN SMOKING):
Never used: 916, 77.4%
Rarely use: 82, 6.9%
>1x/month: 32, 2.7%
Every day: 14, 1.2%
No answer: 139, 11.7%
MODAFINIL:
Never: 76.5%
Rarely: 78, 6.6%
>1x/month: 48, 4.1%
Every day: 9, 0.8%
No answer: 143, 12.1%
TRUE PRISONERS’ DILEMMA:
Defect: 341, 28.8%
Cooperate: 316, 26.7%
Not sure: 297, 25.1%
No answer: 229, 19.4%
FREE WILL:
Not confused: 655, 55.4%
Somewhat confused: 296, 25%
Confused: 81, 6.8%
No answer: 151, 12.8%
TORTURE VS. DUST SPECKS:
Choose dust specks: 435, 36.8%
Choose torture: 261, 22.1%
Not sure: 225, 19%
Don’t understand: 22, 1.9%
No answer: 240, 20.3%
SCHRODINGER EQUATION:
Can’t calculate it: 855, 72.3%
Can calculate it: 175, 14.8%
No answer: 153, 12.9%
PRIMARY LANGUAGE:
English: 797, 67.3%
German: 54, 4.5%
French: 13, 1.1%
Finnish: 11, 0.9%
Dutch: 10, 0.9%
Russian: 15, 1.3%
Portuguese: 10, 0.9%
These are all the languages with ten or more speakers, but we also have everything from Marathi to Tibetan. You can see the full table here.
NEWCOMB’S PROBLEM:
One-box: 726, 61.4%
Two-box: 78, 6.6%
Not sure: 53, 4.5%
Don’t understand: 86, 7.3%
No answer: 240, 20.3%
ENTREPRENEUR:
Don’t want to start business: 447, 37.8%
Considering starting business: 334, 28.2%
Planning to start business: 96, 8.1%
Already started business: 112, 9.5%
No answer: 194, 16.4%
ANONYMITY:
Post using real name: 213, 18%
Easy to find real name: 256, 21.6%
Hard to find name, but wouldn’t bother me if someone did: 310, 26.2%
Anonymity is very important: 170, 14.4%
No answer: 234, 19.8%
HAVE YOU TAKEN A PREVIOUS LW SURVEY?
No: 559, 47.3%
Yes: 458, 38.7%
No answer: 116, 14%
TROLL TOLL POLICY:
Disapprove: 194, 16.4%
Approve: 178, 15%
Haven’t heard of this: 375, 31.7%
No opinion: 249, 21%
No answer: 187, 15.8%
MYERS-BRIGGS:
INTJ: 163, 13.8%
INTP: 143, 12.1%
ENTJ: 35, 3%
ENTP: 30, 2.5%
INFP: 26, 2.2%
INFJ: 25, 2.1%
ISTJ: 14, 1.2%
No answer: 715, 60%
This includes all types with greater than 10 people. You can see the full table here.
Part 3: Numerical Data
Except where indicated otherwise, all the numbers below are given in the format:
mean+standard_deviation (25% level, 50% level/median, 75% level) [n = number of data points]
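For anyone re-deriving these figures from the released data, the summary format can be computed with the standard library alone. A minimal sketch (quartile interpolation method may differ slightly from whatever spreadsheet produced the originals):

```python
import statistics

def summarize(values):
    """Summary in the post's format: mean + sd (25%, median, 75%) [n]."""
    q1, median, q3 = statistics.quantiles(values, n=4)  # 25/50/75% levels
    return (round(statistics.mean(values), 1),
            round(statistics.stdev(values), 1),
            (q1, median, q3),
            len(values))
```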
INTELLIGENCE:
IQ (self-reported): 138.7 + 12.7 (130, 138, 145) [n = 382]
SAT (out of 1600): 1485.8 + 105.9 (1439, 1510, 1570) [n = 321]
SAT (out of 2400): 2319.5 + 1433.7 (2155, 2240, 2320)
ACT: 32.7 + 2.3 (31, 33, 34) [n = 207]
IQ (on iqtest.dk): 125.63 + 13.4 (118, 130, 133) [n = 378]
I am going to harp on these numbers because in the past some people have been pretty quick to ridicule this survey’s intelligence numbers as completely useless and impossible and so on.
According to IQ Comparison Site, an SAT score of 1485/1600 corresponds to an IQ of about 144. According to Ivy West, an ACT of 33 corresponds to an SAT of 1470 (and thence to IQ of 143).
So if we consider self-report, SAT, ACT, and iqtest.dk as four measures of IQ, these come out to 139, 144, 143, and 126, respectively.
All of these are pretty close except iqtest.dk. I ran a correlation between all of them and found that self-reported IQ is correlated with SAT scores at the 1% level and with iqtest.dk scores at the 5% level, but SAT and iqtest.dk scores are not correlated with each other.
Of all these, I am least likely to trust iqtest.dk. First, it’s a random Internet IQ test. Second, it correlates poorly with the other measures. Third, a lot of people have complained in the comments to the survey post that it exhibits some weird behavior.
But iqtest.dk gave us the lowest number! And even it said the average was 125 to 130! So I suggest that we now have pretty good, pretty believable evidence that the average IQ for this site really is somewhere in the 130s, and that self-reported IQ isn’t as terrible a measure as one might think.
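The cross-checks between these measures are ordinary Pearson correlations. A minimal sketch, implemented from scratch so it runs anywhere (the real columns are in the released CSV):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def t_statistic(r, n):
    """t statistic for testing r against zero; compare to a t table at n-2 df."""
    return r * math.sqrt((n - 2) / (1 - r * r))
```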
AGE:
27.8 + 9.2 (22, 26, 31) [n = 1185]
LESS WRONG USE:
Karma: 1078 + 2939.5 (0, 4.5, 136) [n = 1078]
Months on LW: 26.7 + 20.1 (12, 24, 40) [n = 1070]
Minutes/day on LW: 19.05 + 24.1 (5, 10, 20) [n = 1105]
Wiki views/month: 3.6 + 6.3 (0, 1, 5) [n = 984]
Wiki edits/month: 0.1 + 0.8 (0, 0, 0) [n = 984]
PROBABILITIES:
Many Worlds: 51.6 + 31.2 (25, 55, 80) [n = 1005]
Aliens (universe): 74.2 + 32.6 (50, 90, 99) [n = 1090]
Aliens (galaxy): 42.1 + 38 (5, 33, 80) [n = 1081]
Supernatural: 5.9 + 18.6 (0, 0, 1) [n = 1095]
God: 6 + 18.7 (0, 0, 1) [n = 1098]
Religion: 3.8 + 15.5 (0, 0, 0.8) [n = 1113]
Cryonics: 18.5 + 24.8 (2, 8, 25) [n = 1100]
Antiagathics: 25.1 + 28.6 (1, 10, 35) [n = 1094]
Simulation: 25.1 + 29.7 (1, 10, 50) [n = 1039]
Global warming: 79.1 + 25 (75, 90, 97) [n = 1112]
No catastrophic risk: 71.1 + 25.5 (55, 80, 90) [n = 1095]
Space: 20.1 + 27.5 (1, 5, 30) [n = 953]
CALIBRATION:
Year of Bayes’ birth: 1767.5 + 109.1 (1710, 1780, 1830) [n = 1105]
Confidence: 33.6 + 23.6 (20, 30, 50) [n= 1082]
MONEY:
Income/year: 50,913 + 60644.6 (12000, 35000, 74750) [n = 644]
Charity/year: 444.1 + 1152.4 (0, 30, 250) [n = 950]
SIAI/CFAR charity/year: 309.3 + 3921 (0, 0, 0) [n = 961]
Aging charity/year: 13 + 184.9 (0, 0, 0) [n = 953]
TIME USE:
Hours online/week: 42.4 + 30 (21, 40, 59) [n = 944]
Hours reading/week: 30.8 + 19.6 (18, 28, 40) [n = 957]
Hours writing/week: 7.9 + 9.8 (2, 5, 10) [n = 951]
POLITICAL COMPASS:
Left/Right: −2.4 + 4 (-5.5, −3.4, −0.3) [n = 476]
Libertarian/Authoritarian: −5 + 2 (-6.2, −5.2, −4)
BIG 5 PERSONALITY TEST:
Big 5 (O): 60.6 + 25.7 (41, 65, 84) [n = 453]
Big 5 (C): 35.2 + 27.5 (10, 30, 58) [n = 453]
Big 5 (E): 30.3 + 26.7 (7, 22, 48) [n = 454]
Big 5 (A): 41 + 28.3 (17, 38, 63) [n = 453]
Big 5 (N): 36.6 + 29 (11, 27, 60) [n = 449]
These scores are in percentiles, so LWers are more Open, but less Conscientious, Agreeable, Extraverted, and Neurotic than average test-takers. Note that people who take online psychometric tests are probably a pretty skewed category already so this tells us nothing. Also, several people got confusing results on this test or found it different than other tests that they took, and I am pretty unsatisfied with it and don’t trust the results.
AUTISM QUOTIENT:
AQ: 24.1 + 12.2 (17, 24, 30) [n = 367]
This test says the average control subject got 16.4 and 80% of those diagnosed with autism spectrum disorders get 32+ (which of course doesn’t tell us what percent of people above 32 have autism...). If we trust them, most LWers are more autistic than average.
CALIBRATION:
Reverend Thomas Bayes was born in 1701. Survey takers were asked to guess this date within 20 years, so anyone who guessed between 1681 and 1721 was recorded as getting a correct answer. The percent of people who answered correctly is recorded below, stratified by the confidence they gave of having guessed correctly and with the number of people at that confidence level.
0-5: 10% [n = 30]
5-15: 14.8% [n = 183]
15-25: 10.3% [n = 242]
25-35: 10.7% [n = 225]
35-45: 11.2% [n = 98]
45-55: 17% [n = 118]
55-65: 20.1% [n = 62]
65-75: 26.4% [n = 34]
75-85: 36.4% [n = 33]
85-95: 60.2% [n = 20]
95-100: 85.7% [n = 23]
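For anyone replicating this from the raw data, the tally is straightforward: bucket answers by stated confidence and score each guess against the 20-year tolerance. A sketch (the input here is plain (guess, confidence) pairs; mapping the CSV columns to them is left as an assumption):

```python
# Score a guess as correct if it falls within 20 years of 1701,
# Bayes's actual birth year, then compute accuracy per confidence bucket.
BAYES_BORN, TOLERANCE = 1701, 20
EDGES = (0, 5, 15, 25, 35, 45, 55, 65, 75, 85, 95, 100)

def calibration_table(responses):
    """responses: (guessed_year, stated_confidence_percent) pairs."""
    table = {}
    for lo, hi in zip(EDGES, EDGES[1:]):
        bucket = [abs(year - BAYES_BORN) <= TOLERANCE
                  for year, conf in responses
                  if lo <= conf < hi or (hi == 100 and conf == 100)]
        if bucket:
            table[(lo, hi)] = (sum(bucket) / len(bucket), len(bucket))
    return table
```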
Here’s a classic calibration chart. The blue line is perfect calibration. The orange line is you guys. And the yellow line is average calibration from an experiment I did with untrained subjects a few years ago (which of course was based on different questions and so not directly comparable).
The results are atrocious; when Less Wrongers are 50% certain, they only have about a 17% chance of being correct. On this problem, at least, they are as bad or worse at avoiding overconfidence bias as the general population.
My hope was that this was the result of a lot of lurkers who don’t know what they’re doing stumbling upon the survey and making everyone else look bad, so I ran a second analysis. This one included only people who had been in the community at least two years and had accumulated at least 100 karma; this limited my sample size to about 210 people.
I’m not going to post exact results, because I made some minor mistakes which means they’re off by a percentage point or two, but the general trend was that they looked exactly like the results above: atrocious. If there is some core of elites who are less biased than the general population, they are well past the 100 karma point and probably too rare to feel confident even detecting at this kind of a sample size.
I really have no idea what went so wrong. Last year’s results were pretty good—encouraging, even. I wonder if it’s just an especially bad question. Bayesian statistics is pretty new; one would expect Bayes to have been born in rather more modern times. It’s also possible that I’ve handled the statistics wrong on this one; I wouldn’t mind someone double-checking my work.
Or we could just be really horrible. If we haven’t even learned to avoid the one bias that we can measure super well and which is most susceptible to training, what are we even doing here? Some remedial time at PredictionBook might be in order.
HYPOTHESIS TESTING:
I tested a few of the hypotheses that were proposed in the survey design threads.
Are people who understand quantum mechanics more likely to believe in Many Worlds? We perform a t-test, checking whether one’s probability of the MWI being true depends on whether or not one can solve the Schrodinger Equation. People who could solve the equation assigned MWI an average probability of 54.3%, compared to 51.3% among those who could not. The p-value is 0.26: a difference this large would arise by chance about 26% of the time even if there were no real effect. Therefore, we fail to establish that people’s probability of MWI varies with understanding of quantum mechanics.
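The comparison above is a standard two-sample (Welch’s) t-test. A minimal sketch, with made-up probabilities standing in for the survey answers:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variance."""
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(va / len(sample_a) + vb / len(sample_b))
    return (statistics.mean(sample_a) - statistics.mean(sample_b)) / se

# Illustrative numbers only: MWI probabilities from equation-solvers
# vs. non-solvers.
solvers = [80, 60, 40, 55, 70]
non_solvers = [50, 45, 65, 30, 60]
t = welch_t(solvers, non_solvers)
```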
Are there any interesting biological correlates of IQ? We run a correlation between self-reported IQ, height, maternal age, and paternal age. The correlations are in the expected direction but not significant.
Are there differences in the ways men and women interact with the community? I had vaguely gotten the impression that women were proportionally younger, newer to the community, and more likely to be referred via HPMOR. The average age of women on LW is 27.6 compared to 27.7 for men; obviously this difference is not significant. 14% of the people referred via HPMOR were women, compared to about 10% of the community at large, but this difference is pretty minor. Women were on average newer to the community (21 months vs. 39 for men), but to my surprise a t-test was unable to declare this significant. Maybe I’m doing it wrong?
Does the amount of time spent in the community affect one’s beliefs in the same way as in previous surveys? I ran some correlations and found that it does. People who have been around longer continue to be more likely to believe in MWI, less likely to believe in aliens in the universe (though not in our galaxy), and less likely to believe in God (though not religion). There was no effect on cryonics this time.
In addition, the classic correlations between different beliefs continue to hold true. There is an obvious cluster of God, religion, and the supernatural. There’s also a scifi cluster of cryonics, antiagathics, MWI, aliens, and the Simulation Hypothesis, and catastrophic risk (this also seems to include global warming, for some reason).
Are there any differences between men and women in regards to their belief in these clusters? We run a t-test between men and women. Men and women have about the same probability of God (men: 5.9, women: 6.2, p = .86) and similar results for the rest of the religion cluster, but men have much higher beliefs in for example antiagathics (men 24.3, women: 10.5, p < .001) and the rest of the scifi cluster.
DESCRIPTIONS OF LESS WRONG
Survey users were asked to submit a description of Less Wrong in 140 characters or less. I’m not going to post all of them, but here is a representative sample:
- “Probably the most sensible philosophical resource avaialble.”
- “Contains the great Sequences, some of Luke’s posts, and very little else.”
- “The currently most interesting site I found ont the net.”
- “EY cult”
- “How to think correctly, precisely, and efficiently.”
- “HN for even bigger nerds.”
- “Social skills philosophy and AI theorists on the same site, not noticing each other.”
- “Cool place. Any others like it?”
- “How to avoid predictable pitfalls in human psychology, and understand hard things well: The Website.”
- “A bunch of people trying to make sense of the wold through their own lens, which happens to be one of calculation and rigor”
- “Nice.”
- “A font of brilliant and unconventional wisdom.”
- “One of the few sane places on Earth.”
- “Robot god apocalypse cult spinoff from Harry Potter.”
- “A place to converse with intelligent, reasonably open-minded people.”
- “Callahan’s Crosstime Saloon”
- “Amazing rational transhumanist calming addicting Super Reddit”
- “Still wrong”
- “A forum for helping to train people to be more rational”
- “A very bright community interested in amateur ethical philosophy, mathematics, and decision theory.”
- “Dying. Social games and bullshit now >50% of LW content.”
- “The good kind of strange, addictive, so much to read!”
- “Part genuinely useful, part mental masturbation.”
- “Mostly very bright and starry-eyed adults who never quite grew out of their science-fiction addiction as adolescents.”
- “Less Wrong: Saving the world with MIND POWERS!”
- “Perfectly patternmatches the ‘young-people-with-all-the-answers’ cliche”
- “Rationalist community dedicated to self-improvement.”
- “Sperglord hipsters pretending that being a sperglord hipster is cool.” (this person’s Autism Quotient was two points higher than LW average, by the way)
- “An interesting perspective and valuable database of mental techniques.”
- “A website with kernels of information hidden among aspy nonsense.”
- “Exclusive, elitist, interesting, potentially useful, personal depression trigger.”
- “A group blog about rationality and related topics. Tends to be overzealous about cryogenics and other pet ideas of Eliezer Yudkowsky.”
- “Things to read to make you think better.”
- “Excellent rationality. New-age self-help. Worrying groupthink.”
- “Not a cult at all.”
- “A cult.”
- “The new thing for people who would have been Randian Objectivists 30 years ago.”
- “Fascinating, well-started, risking bloat and failure modes, best as archive.”
- “A fun, insightful discussion of probability theory and cognition.”
- “More interesting than useful.”
- “The most productive and accessible mind-fuckery on the Internet.”
- “A blog for rationality, cognitive bias, futurism, and the Singularity.”
- “Robo-Protestants attempting natural theology.”
- “Orderly quagmire of tantalizing ideas drawn from disagreeable priors.”
- “Analyze everything. And I do mean everything. Including analysis. Especially analysis. And analysis of analysis.”
- “Very interesting and sometimes useful.”
- “Where people discuss and try to implement ways that humans can make their values, actions, and beliefs more internally consistent.”
- “Eliezer Yudkowsky personality cult.”
- “It’s like the Mormons would be if everyone were an atheist and good at math and didn’t abstain from substances.”
- “Seems wacky at first, but gradually begins to seem normal.”
- “A varied group of people interested in philosophy with high Openness and a methodical yet amateur approach.”
- “Less Wrong is where human algorithms go to debug themselves.”
- “They’re kind of like a cult, but that doesn’t make them wrong.”
- “A community blog devoted to nerds who think they’re smarter than everyone else.”
- “90% sane! A new record!”
- “The Sequences are great. LW now slowly degenerating to just another science forum.”
- “The meetup groups are where it’s at, it seems to me. I reserve judgment till I attend one.”
- “All I really know about it is this long survey I took.”
- “The royal road of rationality.”
- “Technically correct: The best kind of correct!”
- “Full of angry privilege.”
- “A sinister instrument of billionaire Peter Thiel.”
- “Dangerous apocalypse cult bent on the systematic erasure of traditional values and culture by any means necessary.”
- “Often interesting, but I never feel at home.”
- “One of the few places I truly feel at home, knowing that there are more people like me.”
- “Currently the best internet source of information-dense material regarding cog sci, debiasing, and existential risk.”
- “Prolific and erudite writing on practical techniques to enhance the effectiveness of our reason.”
- “An embarrassing Internet community formed around some genuinely great blog writings.”
- “I bookmarked it a while ago and completely forgot what it is about. I am taking the survey to while away my insomnia.”
- “A somewhat intimidating but really interesting website that helps refine rational thinking.”
- “A great collection of ways to avoid systematic bias and come to true and useful conclusions.”
- “Obnoxious self-serving, foolish trolling dehumanizing pseudointellectualism, aesthetically bankrupt.”
- “The cutting edge of human rationality.”
- “A purveyor of exceedingly long surveys.”
PUBLIC RELEASE
That last commenter was right. This survey had vastly more data than any previous incarnation; although there are many more analyses I would like to run I am pretty exhausted and I know people are anxious for the results. I’m going to let CFAR analyze and report on their questions, but the rest should be a community effort. So I’m releasing the survey to everyone in the hopes of getting more information out of it. If you find something interesting you can either post it in the comments or start a new thread somewhere.
The data I’m providing is the raw data EXCEPT:
- I deleted a few categories that I removed halfway through the survey for various reasons
- I deleted 9 entries that were duplicates of other entries, i.e. someone pressed ‘submit’ twice.
- I deleted the timestamp, which would have made people extra-identifiable, and sorted people by their CFAR random number to remove time order information.
- I removed one person whose information all came out as weird symbols.
- I numeralized some of the non-numeric data, especially on the number of months in community question. This is not the version I cleaned up fully, so you will get to experience some of the same pleasure I did working with the rest.
- I deleted 117 people who either didn’t answer the privacy question or who asked me to keep them anonymous, leaving 1067 people.
Here it is: Data in .csv format, Data in Excel format
- 24 Jan 2014 18:14 UTC; 1 point) 's comment on 2013 Survey Results by (
- 15 Oct 2017 19:53 UTC; 1 point) 's comment on HOWTO: Screw Up The LessWrong Survey and Bring Great Shame To Your Family by (
- 24 Dec 2012 8:25 UTC; 0 points) 's comment on LessWrong Survey Results: Do Ethical Theories Affect Behavior? by (
- 26 Jul 2013 19:15 UTC; 0 points) 's comment on Making Rationality General-Interest by (
- 1 Dec 2013 20:08 UTC; 0 points) 's comment on 2013 Less Wrong Census/Survey by (
- 5 May 2013 14:46 UTC; 0 points) 's comment on Mortal: A Transponyist Fanfiction by (
- 30 Nov 2012 9:16 UTC; 0 points) 's comment on Overconfident Pessimism by (
- 8 Jan 2014 1:21 UTC; 0 points) 's comment on New Year’s Prediction Thread (2014) by (
- 11 Sep 2014 2:42 UTC; 0 points) 's comment on 2013 Survey Results by (
- 27 Dec 2012 1:22 UTC; 0 points) 's comment on Poll—Is endless September a threat to LW and what should be done? by (
- LessWrong IQ Survey by 2 Dec 2012 21:53 UTC; -2 points) (
- 24 Dec 2012 5:30 UTC; -4 points) 's comment on New censorship: against hypothetical violence against identifiable people by (
- 20 Aug 2013 1:35 UTC; -6 points) 's comment on Open thread, July 29-August 4, 2013 by (
- 2 Jan 2013 4:18 UTC; -9 points) 's comment on Politics Discussion Thread January 2013 by (
- Stop Using LessWrong: A Practical Interpretation of the 2012 Survey Results by 30 Dec 2012 22:00 UTC; -55 points) (
Hi Yvain,
please state a definite end date next year. Filling out the survey didn’t have a really high priority for me, but knowing that I had “about a month” made me put it off. Had I known that the last possible day was the 26th of November, I probably would have fit it in sometime in between other stuff.
Hm, could it be that the longer survey format this time around cut down on the number of responses as well?
So, what “cut down on the number of responses”?
I presume that John’s intended implication was that community growth was greater than indicated by the number of respondents.
The calibration question is an n=1 sample on one of the two important axes (those axes being who’s answering, and what question they’re answering). Give a question that’s harder than it looks, and people will come out overconfident on average; give a question that’s easier than it looks, and they’ll come out underconfident on average. Getting rid of this effect requires a pool of questions, so that it’ll average out.
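A quick simulation makes this concrete (a hedged sketch; the question difficulties, confidence levels, and sample size are all invented):

```python
import random

random.seed(0)
N = 1000

# One "harder than it looks" question: everyone states 70% confidence,
# but the true chance of answering correctly is only 40%.
single_q_hit_rate = sum(random.random() < 0.40 for _ in range(N)) / N
print(single_q_hit_rate)  # close to 0.40: the whole group looks overconfident

# A pool of questions whose true difficulties average out to the stated
# 70% confidence: the apparent miscalibration washes out.
difficulties = [0.40, 0.55, 0.70, 0.85, 1.00]
pool_hit_rate = sum(random.random() < random.choice(difficulties)
                    for _ in range(N)) / N
print(pool_hit_rate)  # close to 0.70
```

The same agents look badly miscalibrated on the single question and fine on the pool, even though their confidence policy never changed.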
Yep. (Or as Yvain suggests, give a question which is likely to be answered with a bias in a particular direction.)
It’s not clear what you can conclude from the fact that 17% of all people who answered a single question at 50% confidence got it right, but you can’t conclude from it that if you asked one of these people a hundred binary questions and they answered “yes” at 50% confidence, that person would only get 17% right. The latter is what would deserve to be called “atrocious”; I don’t believe the adjective applies to the results observed in the survey.
I’m not even sure that you can draw the conclusion “not everyone in the sample is perfectly calibrated” from these results. Well, the people who were 100% sure they were wrong, and happened to be correct, are definitely not perfectly calibrated; but I’m not sure what we can say of the rest.
I have often pondered this problem with respect to some of the traditional heuristics and biases studies, e.g. the “above-average driver” effect. If people consult their experiences of subjective difficulty at doing a task, and then guess they are above average for the ones that feel easy, and below average for the ones that feel hard, this will to some degree track their actual particular strengths and weaknesses. Plausibly a heuristic along these lines gives overall better predictions than guessing “I am average” about everything.
However, if we focus in on activities that happen to be unusually easy-feeling or hard-feeling in general, then we can make the heuristics look bad by only showing their successes and not their failures. Although the name “heuristics and biases” does reflect this notion: we have heuristics because they usually work, but they produce biases in some cases as an acceptable loss.
I would agree that this explains the apparent atrocious calibration. It’s worth an edit to the main post. No reason to beat ourselves up needlessly.
People were answering different questions in the sense that they each had an interval of their own choosing to assign a probability to, but obviously different people’s performance here was going to be strongly correlated. Bayes just happens to be the kind of guy who was born surprisingly early. If everyone had literally been asked to assign a probability to the exact same proposition, like “Bayes was born before 1750” or “this coin will come up heads”, that would have been a more extreme case. We’d have found that events that people predicted with probability x% actually happened either 0% or 100% of the time, and it wouldn’t mean people were infinitely badly calibrated.
All of that also applies to the year calibration questions in previous surveys and yet people did much better in those.
Because they weren’t about events that occurred surprisingly early.
Yes, and this is probably worth an edit to the original post. For a more extreme example, consider what would happen if you asked a large group of people to assess the probability that the same coin would come up heads. You’d find that events that people said would happen 50% of the time happened either 0% or 100% of the time, but it would be wrong to conclude they were atrociously calibrated.
I previously mentioned that item non-response might be a good measure of Conscientiousness. Before doing anything fancy with non-response, I first checked that there was a correlation with the questionnaire reports. The correlation is zero:
I am completely surprised. The results in the economics paper looked great and the rationale is very plausible. Yet… The 2 sets of data here have the right ranges, there’s plenty of variation in both dimension, I’m sure I’m catching most of the item non-responses or NAs given that there are non-responses as high as 34, there’s a lot of datapoints, and it’s not that the correlation is the opposite direction which might indicate a coding error but that there’s none at all. Yvain questions the Big Five results, but otherwise they look exactly as I would’ve predicted before seeing the results: low C and E and A, high O, medium N.
There may be something very odd about LWers and Conscientiousness; when I try C vs Income, there’s an almost-zero correlation again:
I guess the next step is a linear model on income vs age, Conscientiousness, and IQ:
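Such a model can be sketched as follows (a hypothetical Python version on simulated data, not the original analysis; the coefficient sizes and noise level are made up to mirror the reported age-dominated result):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Invented predictors, loosely shaped like the survey columns.
age = rng.uniform(18, 60, n)
consc = rng.normal(50, 10, n)
iq = rng.normal(138, 13, n)
# Assume income is driven mostly by age plus a lot of noise.
income = 2000 * age + rng.normal(0, 30000, n)

# Ordinary least squares: income ~ intercept + age + consc + iq.
X = np.column_stack([np.ones(n), age, consc, iq])
beta, *_ = np.linalg.lstsq(X, income, rcond=None)

pred = X @ beta
r2 = 1 - ((income - pred) ** 2).sum() / ((income - income.mean()) ** 2).sum()
print(beta.round(1))
print(round(r2, 3))  # modest: most of the variance stays unexplained
```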
So all of them combined don’t explain much, and most of the work is being done by the age variable… There are many high-income LWers, supposedly (in this subset of respondents reporting age, income, IQ, and Conscientiousness, the max is 700,000), so I’d expect a cumulative r^2 of more than 0.173 for all 3 variables; if those aren’t governing income, what is? Maybe everyone working with computers is rich and the others poor? Let’s look at everyone who submitted salary and profession and see whether the practical computer people are making bank:
Wow. Just wow. 76k vs 43k. I mean, maybe this would go away with enough fiddling (eg. cost-of-living), but it’s still dramatic. This suggests a new theory to me: maybe Conscientiousness does correlate with income at its usual high rate for everyone except computer people, who are simply in such high demand that lack of Conscientiousness doesn’t matter:
So for the CS people the correlation is small and non-statistically-significant, for non-CS people the correlation is almost 3x larger and statistically-significant.
There is a correlation of 0.13 between non-responses and N.
Of course, there’s also a correlation of −0.13 between C and the random number generator.
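Spurious correlations like that are expected once you screen many variables; a toy demonstration (all data invented):

```python
import random
from statistics import mean

random.seed(2)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One invented "C score" column, screened against 100 columns of pure noise.
n = 200
c_scores = [random.gauss(50, 10) for _ in range(n)]
max_r = max(abs(pearson_r(c_scores, [random.random() for _ in range(n)]))
            for _ in range(100))
print(round(max_r, 2))  # correlations around 0.13 with noise are unremarkable
```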
People who had seen the RNG give a large number were primed to feel unusually reckless when taking the Big 5 test. Duh. (Just kidding.)
Were you expecting that people with high C would or wouldn’t skip questions? I can see arguments either way. Conscientious people might skip questions they don’t have answers to or that they aren’t willing to put the time into to give a good answer, or they might put in the work to have answers they consider good to as many questions as possible.
Is it feasible to compare wrong sort of answer with C?
Is it possible that the test for C wasn’t very good?
Wouldn’t; that was the claim of the linked paper.
Not really, if it wasn’t caught by the no-answer check or the NA check.
As I said, it came out as expected for LW as a whole, and it did correlate with income once the CS salaries were removed… Hard to know what ground-truth there could be to check the scores against.
I am also surprised by this. I wonder about the effect of “I’m taking this survey so I don’t have to go to bed / do work / etc.,” but I wouldn’t have expected that to be as large as the diligence effect.
Also, perhaps look at nonresponse by section? I seem to recall the C part being after the personality test, which might be having some selection effects.
What do you mean? I can’t compare non-response with anyone who didn’t supply a C score, and there were plenty of questions to leave blank after the personality test section.
It seems to me that other survey non-response may be uncorrelated with C once you condition on taking a long personality survey, especially if the personality survey doesn’t allow nonresponse. (I seem to recall taking all of the optional surveys and considering the personality one the most boring. I don’t know how much that generalizes to other people.) The first way that comes to mind to gather information for this is to compare the nonresponse of people who supplied personality scores and people who didn’t, but that isn’t a full test unless you can come up with another way to link the nonresponse to C.
I was thinking it might help to break down the responses by section, and seeing if nonresponse to particular sections was correlated with C, but the result could only be that some sections are anticorrelated if a few are correlated. So that probably won’t get you anything.
Why would the strong correlation go away after adding a floor? That would simply restrict the range… if that were true, we’d expect to see a cutoff for all C scores but in fact we see plenty of very low C scores being reported.
Yes. You’d expect, by definition, that people who answered the personality questions would have fewer non-responses than the people who didn’t… That’s pretty obvious and true:
Note also that in the last two surveys the mean and median answers were approximately correct, whereas this time even the first quartile answer was too late by almost a decade. So it’s not just a matter of overconfidence—there also was a systematic error. Note that Essay Towards Solving a Problem in the Doctrine of Chances was published posthumously when Bayes would have been 62; if people estimated the year it was published and assumed that he had been approximately in his thirties (as I did), that would explain half of the systematic bias.
To expand on this: confidence intervals that are accurate over multiple judgements by the same person may not be accurate for the same judgement made by multiple people. Normally, we can group everyone’s responses and measure how many people were actually right when they said they were 70% sure; this averages out to 70% because the error is caused by independent variations in each person’s estimate. But with a systematic error, we all fail at the same time, so the group’s hit rate on that one question can look terrible even if each individual’s confidence levels are otherwise reasonable.
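A small simulation of this distinction (all numbers invented except Bayes’s approximate birth year): with only independent noise, a 90% interval contains the truth about 90% of the time; add a bias shared by everyone, and the whole group misses at once.

```python
import random

random.seed(1)

TRUE_YEAR = 1701    # Bayes's approximate birth year
N = 5000
HALF_WIDTH = 16.45  # half-width of a 90% interval when individual sd = 10

def hit_rate(shared_bias):
    """Fraction of people whose interval contains the true year."""
    hits = 0
    for _ in range(N):
        point = TRUE_YEAR + shared_bias + random.gauss(0, 10)
        hits += abs(point - TRUE_YEAR) <= HALF_WIDTH
    return hits / N

independent_only = hit_rate(0)
with_shared_bias = hit_rate(30)
print(independent_only)   # ~0.90: independent errors average out
print(with_shared_bias)   # far below 0.90: everyone misses together
```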
I had a vaguely right idea for the year of publication, and didn’t know it was posthumous, but assumed that it was published in his middle-to-old age and so got the question right.
This question was biased against people who don’t believe in history.
For my answer, which was wildly wrong, I guesstimated by interpolating backward from the rate of technological and cultural advance in various cultures throughout my lifetime and the dependency of such advances on Bayesian and related logics, with an adjustment for known wars and persecution of scientists and an assumption that Bayes lived in the Western world. I should have realized that my confidence in each of these estimates (except the last) was not very good and that I really shouldn’t have had more than marginal confidence in my answer, but I was hoping that the sheer number of assumptions I made would approach the statistical mean of my confidences and that the overestimates would counterbalance the underestimates.
The real lesson I learned from this exercise is that I shouldn’t have such high confidence in my ability to produce and compound a statistically significant number of assumptions with associated confidence levels.
Have you read Malcolm Gladwell—Blink? It’s a fun book that doesn’t take too long, which hella makes up for the occasional failure of rigor. Anyhow, the conclusion is that even on hard problems, expert-trusted models will still have very few parameters. And those parameters don’t have to be the same things you’d use if you were a perfect reasoner—what’s important is that you can use it as an indicator.
I personally had error bars of 75 years on my confidence and was 74 years off. I’m not sure if I translated that correctly into percent certainty of being within 20 years of correct, but I felt okay about the result. This might be another source of systematic error?
On IQ Accuracy:
As Yvain says, “people have been pretty quick to ridicule this survey’s intelligence numbers as completely useless and impossible and so on” because, if they’re true, it means that the average LessWronger is gifted. Yvain added a few questions to the 2012 survey, including the ACT and SAT questions and the Myers-Briggs personality type question that I requested (I’ll explain why this is interesting), which give us a few other things to check against and have made the figures more believable. The ridicule may be an example of the “virtuous doubt” that Luke warns about in Overconfident Pessimism, so it makes sense to “consider the opposite”:
The distribution of Myers-Briggs personality types on LessWrong replicates the Mensa pattern. This is remarkable since the patterns of personality types here are, in many significant ways, the exact opposite of what you’d find in the regular population. For instance, the introverted rationalists and idealists are each about 1% of the population. Here, they are the majority and it’s the artisans and guardians who are relegated to 1% or less of our population.
Mensa’s personality test results were published in the December 1993 Mensa Bulletin. Their numbers.
So, if you believe that most of the people who took the survey lied about their IQ, you also need to believe all of the following:
That most of these people also realized they needed to do IQ correlation research and fudge their SAT and ACT scores in order for their IQ lie to be believable.
Some explanation as to why the average of lurkers’ IQ scores would come out so close to the average of posters’ IQ scores. The lurkers don’t have karma to show off, and there’s no known incentive good enough to get so many lurkers to lie about their IQ score. Vaniver’s figures.
Some explanation for why the personality type pattern at LessWrong is radically different from the norm and yet very similar to the personality type pattern Mensa published and also matched my predictions. Even if they had knowledge of the Mensa personality test results and decided to fudge their personality type responses, too, they somehow managed to fudge them in such a way that their personality types accidentally matched my predictions.
That they decided not to cheat when answering the Bayes birthday question even though they were dishonest enough to lie on the IQ question, motivated to look intelligent, and it takes a lot less effort to fudge the Bayes question than the intelligence and personality questions. (This was suggested by ArisKatsaris).
That both posters and lurkers had some motive strong enough to justify spending 20+ minutes doing the IQ correlation research and fudging personality test questions while probably bored of ticking options after filling out most of a very long survey.
It’s easier just to put the real number in the IQ box than do all that work to make it believable, and it’s not like the liars are likely to get anything out of boasting anonymously, so the cost-benefit ratio is just not working in favor of the liar explanation.
If you think about it in terms of Occam’s razor, what is the better explanation? That most people lied about their IQ, and fudged their SAT, ACT and personality type data to match, or that they’re telling the truth?
Summary of criticism:
Possible Motive to Lie: The desire to be associated with a “gifted” group:
In response to this post, it was argued by NonComposMentis that a potential motive to lie is that if the outside world perceives LessWrong as gifted, then anyone having an account on LessWrong will look high-status. In rebuttal:
I figure that lurkers would not be motivated to fudge their results because they don’t have a bunch of karma on their account to show off, and anybody can claim to read LessWrong, so fudging your IQ just to claim that the site you read is full of gifted people isn’t likely to be motivating. I suggested that we compare the average IQs of lurkers and others. Vaniver did the math and they are very, very close.
I argued, among other things, that it would be falling for a Pascal’s mugging to believe that investing the extra time (probably at least $5 worth of time for most of us) into fudging the various different survey questions is likely to contribute to a secret conspiracy to inflate LessWrong’s average IQ.
Did the majority avoid filling out intelligence related questions, letting the gifted skew the results?
Short answer: 74% of people answered at least one intelligence-related question, and since most people filled out only one or two, the fact that the self-report, ACT and SAT score averages are so similar is remarkable.
I realized, while reading Vaniver’s post, that if only 1⁄3 of the survey participants filled out the IQ score, this may have been due to something that skewed the results toward the gifted range: for instance, if more gifted people had been given IQ tests for school placement (and the others didn’t post their IQ score because they did not know it), or if the amount of pride one has in one’s IQ score has a significant influence on whether one reports it.
So I went through the data and realized that most of the people who filled out the IQ question did not fill out all the others. That means that 804 people (74%, not 33%) answered at least one intelligence-related question. As we have seen, the average IQs implied by the IQ, SAT and ACT questions were very close to each other (unsurprisingly, it looks like something’s up with the internet test… removing those, 63% of survey participants answered an intelligence-related question). It’s remarkable in and of itself that each category of test scores generated an average IQ so similar to the others, considering that different people filled them out. I mean, if 1⁄3 of the population had filled out all of the questions and the other 2⁄3 had filled out none, we could say “maybe the 1⁄3 did IQ correlation research and fudged these”, but if most of the population fills out one or two, and the averages for each category come out close to the averages for the other categories, why is that? How would that happen if they were fudging?
It does look to me like people gave whatever test scores they had and that not all the people had test scores to give but it does not look to me like a greater proportion of the gifted people provided an intelligence related survey answer. Instead it looks like most people provided an intelligence related survey answer and the average LessWronger is gifted.
Exploration of personality test fudging:
Erratio and I explored how likely it is that people could successfully fudge their personality tests and why they might do that.
There are a lot of questions on the personality test that have an obvious intelligence component, so it’s possible that people chose the answer they thought was most intelligent.
There are also intelligence related questions where it’s not clear which answer is most intelligent. I listed those.
The intelligence questions would mostly influence the sensing/intuition dichotomy and the thinking/feeling dichotomy. This does not explain why the extraversion/introversion and perceiving/judging results were similar to Mensa’s.
Alternate possibility: The distribution of personality types in Mensa/LW relative to everyone else is an artifact produced by self-identified smart people trying to signal their intelligence by answering ‘yes’ to traits that sound like the traits they ought to have.
eg. I know that a number of the T/F questions are along the lines of “I use logic to make decisions (Y/N)”, which is a no-brainer if you’re trying to signal intelligence.
A hypothetical way to get around this would be to have your partner/family member/best friend next to you as you take the test, ready to call you out when your self-assessment diverges from your actual behaviour (“hold on, what about that time you decided not to go to the concert of [band you love] because you were angry about an unrelated thing?”)
Ok, it’s possible that all of the following happened:
Most of the 1000 people decided to lie about their IQ on the LessWrong survey.
Most of the liars realized that their personality test results were going to be compared with Mensa’s personality type results, and it dawned on them that this would bring their IQ lie into question.
Most of the liars decided that instead of simply skipping the personality test question, or taking it to experience the enjoyment of finding out their type, they were going to fudge the personality test results, too.
Most of the liars actually had the patience to do an additional 72 questions specifically for the purpose of continuing to support a lie when they had just slogged through 100 questions.
Most of the liars did all of that extra work (Researching the IQ correlation with the SAT and the ACT and fudging 72 personality type questions) when it would have been so much easier to put their real IQ in the box, or simply skip the IQ question completely because it is not required.
Most of the liars succeeded in fudging their personality types. This is, of course, possible, but it is likely to be more complicated than it at first seems. They’d have to be lucky that enough of the questions give away their intelligence correlation in the wording (we haven’t verified that). They’d have to have enough of an understanding of what intelligent people are like to choose the right ones. Questions like these are likely to confuse a non-gifted person trying to guess which answers will make them look gifted:
“You are more interested in a general idea than in the details of its realization”
(Do intelligent people like ideas or details more?)
“Strict observance of the established rules is likely to prevent a good outcome”
(Either could be the smarter answer, depending who you ask.)
“You believe the best decision is one that can be easily changed”
(It’s smart to leave your options open, but it’s also more intellectually self-confident and potentially more rewarding to take a risk based on your decision-making abilities.)
“The process of searching for a solution is more important to you than the solution itself”
(Maybe intelligence makes playing with ideas so enjoyable, gifted people see having the solution as less important.)
“When considering a situation you pay more attention to the current situation and less to a possible sequence of events”
(There are those that would consider either one of these to be the smarter one.)
There were a lot of questions that you could guess are correlated with intelligence on the test, and some of them are no-brainers, but are there enough of those no-brainers with obvious intelligence correlation that a non-gifted person intent on looking as intelligent as possible would be able to successfully fudge their personality type?
The massive fudging didn’t create some totally unexpected personality type pattern. For instance, most people in the general population are extraverted. Would the liars realize the intelligence implications and fudge enough extravert questions to replicate Mensa’s introverted pattern? Would they know that choosing the judging answers over the perceiving answers would make them look like Mensans? It makes sense that the thinking vs. feeling and intuition vs. sensing metrics would use questions of the type you’d obviously need to fudge, but why would they also choose introvert and judging answers?
The survey is anonymous and we don’t even know which people gave which IQ responses, let alone are they likely to receive any sort of reward from fudging their IQ score. Can you explain to me:
What reward would most of LessWrong want to get out of lying about their IQs?
Why, in an anonymous context where they can’t even take credit for the IQ score they provided, would most of LessWrong expect to receive any reward at all?
Can you explain to me why fudged personality type data would match my predictions? Even if they were trying to match them, how would they manage it?
“Lie” is a strawman. One could report an estimate, mis-remember, report the other “IQ” (the mental age / chronological age metric), or have taken any one of the entirely faulty online tests that report IQ as high to increase the referral rate (some are bad enough to produce >100 if the answers are filled in at random).
This would be a good point if we were not discussing IQ scores generated by an IQ test selected by Yvain, which many people took at the same time as filling out the survey. This method (and timing) rules out problems due to relying on estimates alone and most of the potential for mis-remembering (neither of which should be assumed likely to result in an average score that’s 30 points too high, as mistakes like these could go in either direction), and, assuming that the IQ test Yvain selected was pretty good, it also rules out the problem of the test being seriously skewed. If you would like to continue this line of argument, one effective method of producing doubt would be to go to the specific IQ test in question, fill out all of the answers randomly, and report the IQ that it produces. If you want to generate a full-on update regarding those particular test results, complete with Yvain being likely to refrain from recommending this source during his next survey, write a script that fills out the test randomly and reports the results, so that multiple people can run it and see for themselves what average IQ the test produces over a large number of trials. You may want to check whether Yvain or Gwern or someone has already done this before going to the trouble.
Also, there really were people whose concern it was that people were lying on the survey. Your “lie is a strawman” perception appears to have been formed due to not having read the (admittedly massive number of) comments on this.
Look. People misremember (and remember the largest value, and so on) in the way most favourable to themselves. While mistakes can in principle go in either direction, in practice they don’t. If you ask men to report their penis size (quite literally), they over-estimate; if you ask them to measure, they still overestimate, but not by as much. This sort of error is absolutely the norm in surveys. More so here, as the calibration (on the Bayes date-of-birth question, at least) was comparatively very bad.
The situation is anything but symmetric, given that the results are rather far from the mean, on a Gaussian.
Furthermore, given the interest in self improvement, people here are likely to have tried to improve their test scores by practice, which would have considerably lower effect on iqtest.dk unless you practice specifically the Raven’s matrices.
The low scores on iqtest.dk are particularly interesting in light of the fact that scores on that test are a result of better assignment of priors / processing of probabilities: fundamentally, one needs to pick the choice that results in the simplest (highest-probability) overall pattern. If one is overconfident that the pattern one sees is the best, one’s score is lowered, so poor calibration will hurt that test more.
I intuit that this is likely to be a popular view among sceptics, but I do not recall ever being presented with research that supports this by anyone. To avoid the lure of “undiscriminating scepticism”, I am requesting to see the evidence of this.
I agree that, for numerous reasons, self-reported IQ scores, SAT scores, ACT scores and any other scores are likely to have some amount of error, and I think it’s likely for the room for error to be pretty big. On that we agree.
An average thirty points higher than normal seems to me to be quite a lot more than “pretty big”. That’s the difference between an IQ in the normal range and an IQ large enough to qualify for every definition of gifted. To use your metaphor, that’s like having a 6-incher and saying it’s 12. I can see guys unconsciously saying it’s 7 if it’s 6, or maybe even 8. But I have a hard time believing that most of these people have let their imaginations run so far away with them as to accidentally believe that they’re Mensa level gifted when they’re average. I’d bet that there was a significant amount of error, but not an average of 30 points.
If you agree with those two, then whether we agree over all just depends on what specific belief we’re each supporting.
I think these beliefs are supported:
* The SAT, ACT, self-reported IQ and / or iqtest.dk scores found on the survey are not likely to be highly accurate.
* Despite inaccuracies, it's very likely that the average LessWrong member has an IQ above average. In other words, I don't think that the scores reported on the survey are so inaccurate that I should believe that most LessWrongers actually have just an average IQ.
* LessWrong is (considering a variety of pieces of evidence, not just the survey) likely to have more gifted people than you'd find by random chance.
Do we agree on those three beliefs?
If not, then please phrase the belief(s) you want to support.
Even if every self-reported IQ is exactly correct, the average of the self-reported IQ values can still be (and likely will still be) higher than the average of the readership’s IQ values.
Consider two readers, Tom and Jim. Tom does an IQ test, and gets a result of 110. Jim does an IQ test, and gets a result of 90. Tom and Jim are both given the option to fill in a survey, which asks (among other questions) what their IQ is. Neither Tom nor Jim intend to lie.
However, Jim seems significantly more likely to decide not to participate; while Tom may decide to fill in the survey as a minor sort of showing off. This effect will skew the average upwards. Perhaps not 30 points upwards… but it’s an additional source of bias, independent of any bias in individual reported values.
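The Tom-and-Jim effect is easy to demonstrate with a toy simulation. Everything below is an invented assumption for illustration, in particular the response curve, which simply makes higher-IQ readers more likely to answer:

```python
import random
import statistics

random.seed(0)

# Hypothetical population: true IQ ~ N(100, 15), everyone reports honestly.
population = [random.gauss(100, 15) for _ in range(100_000)]

def responds(iq):
    # Toy response model (pure assumption): a 130-IQ reader is far more
    # likely to answer the IQ question than a 90-IQ reader.
    return random.random() < min(1.0, max(0.0, (iq - 70) / 80))

reported = [iq for iq in population if responds(iq)]

print(round(statistics.mean(population), 1))  # close to 100
print(round(statistics.mean(reported), 1))    # noticeably above 100
```

Even with zero lying, the mean of the answers that come in sits several points above the mean of the readership, exactly the independent bias described above.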
I remember looking into this when I looked at the survey data. There were only a handful of people who reported two-digit IQs, which is consistent with both the concealment hypothesis and the high-average-intelligence hypothesis. If you assume that nonresponders have an IQ of 100 on average, the average IQ across everyone drops to 112. (I think this assumption is mostly useful for demonstrative purposes; I suspect that the prevalence of people with two-digit IQs on LW is lower than in the general population.)
(You could do some more complicated stuff if you had a functional form for concealment that you wanted to fit, but it's not obvious to me that IQs on LW actually follow a normal distribution, which would make it hard to separate the oddities of concealment from the oddities of the LW population.)
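The adjusted figure is just a weighted average. A back-of-the-envelope sketch, using approximate counts and means from the survey data (roughly a third of ~1067 respondents self-reported an IQ, with a mean near 138.4):

```python
# Approximate figures from the survey data; treat as illustrative.
n_reporting = 346            # respondents who self-reported an IQ
n_total = 1067
n_silent = n_total - n_reporting
mean_reporting = 138.4
assumed_silent_mean = 100    # the demonstrative assumption for nonresponders

overall = (n_reporting * mean_reporting + n_silent * assumed_silent_mean) / n_total
print(round(overall, 1))     # roughly 112
```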
Ah! Good point! Karma for you! Now I will think about whether there is a way to figure out the truth despite this.
Ideas?
Hmmm. Tricky.
Select a random sampling of people (such as by picking names from the phonebook). Ask each person whether they would like to fill in a survey which asks, among other things, for their IQ. If a sufficiently large, representative sample is taken, the average IQ of the sample is likely to be 100 (confirm if possible). Compare this to the average reported IQ, in order to get an idea of the size of the bias.
Select a random sampling of lesswrongers, and ask them for their IQs. If they all respond, this should cut out the self-selection bias (though the odds are that at least some of them won’t respond, putting us back at square one).
It’s probably also worth noting that this is a known problem in statistics which is not easy to compensate for.
There’s also the selection effect of only getting answers from “people who, when asked, can actually name their IQ”.
As one of the sceptics, I might as well mention a specific feature of the self-reported IQs that made me pretty sure they’re inflated. (Even before I noticed this feature, I expected the IQs to be inflated because, well, they’re self-reported. Note that I’m not saying people must be consciously lying, though I wouldn’t rule it out. Also, I agree with your three bullet points but still find an average LW IQ of 138-139 implausibly high.)
The survey has data on education level as well as IQ. Education level correlates well with IQ, so if the self-reported IQ & education data are accurate, the subsample of LWers who reported having a “high school” level of education (or less) should have a much lower average IQ. But in fact the mean IQ of the 34% of LWers with a high school education or less was 136.5, only 2.2 points less than the overall mean.
There is a pretty obvious bias in that calculation: a lot of LWers are young and haven’t had time to complete their education, however high their IQs. This stacks the deck in my favour because it means the high-school-or-less group includes a lot of people who are going to get degrees but haven’t yet, which could exaggerate the IQ of the high-school-or-less group.
I can account for this bias by looking only at the people who said they were ≥29 years old. Among that older group, only 13% had a high school education or less...but the mean IQ of that 13% was even higher* at 139.6, almost equal to the mean IQ of 140.0 for older LWers in general. The sample sizes aren’t huge but I think they’re too big to explain this near-equality away as statistical noise. So IQ or education level or age was systematically misreported, and the most likely candidate is IQ, ’cause almost everyone knows their age & education level, and nerds probably have more incentive to lie on a survey about their IQ than about their age or education level.
* Assuming people start university at age 18, take 3 years to get a bachelor’s, a year to get a master’s, and then up to 7 years to get a PhD, everyone who’s going to get a PhD will have one at age 29. In reality there’re a few laggards but not enough to make much difference; basically the same result comes out if I use age 30 or age 35 as a cutoff.
And I suspect that if you look at the American population for that age cohort, you'll find a much higher percentage than 13% who have a “high school education or less”… All you've shown is that, of the high-school-educated populace, LW attracts the most intelligent end: the people who are dropouts for whatever reason, which for high-IQ people is not that uncommon (and one reason the generic education/IQ correlation isn't close to unity). LW filters for IQ, and so only smart high-school dropouts bother to hang out here? Hardly a daring or special-pleading sort of suggestion. And if we take your reasoning at face value, that the general population-wide IQ/education correlation must hold here, it would suggest that there would be hardly any autodidacts on LW (clearly not the case), such as our leading ‘high school education or less’ member, Eliezer Yudkowsky.
Right, but even among LWers I’d still expect the dropouts to have a lower average IQ if all that’s going on here is selection by IQ. Sketch the diagram. Put down an x-axis (representing education) and a y-axis (IQ). Put a big slanted ellipse over the x-axis to represent everyone aged 29+.
Now (crudely, granted) model the selection by IQ by cutting horizontally through the ellipse somewhere above its centroid. Then split the sample that’s above the horizontal line by drawing a vertical line. That’s the boundary between the high-school-or-less group and everyone else. Forget about everyone below the horizontal line because they’re winnowed out. That leaves group A (the high-IQ people with less education) and group B (the high-IQ people with more).
Even with the filtering, group A is visibly going to have a lower average IQ than B. So even though A comprises “the most intelligent end” of the less educated group, there remains a lingering correlation between education level and IQ in the high-IQ sample; A scores less than B. The correlation won’t be as strong as the general population-wide correlation you refer to, but an attenuated correlation is still a correlation.
It seems implausible to me that education level would screen off the same parts of the IQ distribution in LW as it does in the general population, at least at its lower levels. It’s not too unreasonable to expect LWers with PhDs to have higher IQs than the local mean, but anyone dropping out of high school or declining to enter college because they dislike intellectual pursuits, say, seems quite unlikely to appreciate what we tend to talk about here.
Upvoted. If I repeat the exercise for the PhD holders, I find they have a mean IQ of 146.5 in the older subsample, compared to 140.0 for the whole older subsample, which is consistent with what you wrote.
How significant is that difference?
I did a back-of-the-R-session guesstimate before I posted and got a two-tailed p-value of roughly 0.1, so not significant by the usual standard, but I figured that was suggestive enough.
Doing it properly, I should really compare the PhD holders’ IQ to the IQ of the non-PhD holders (so the samples are disjoint). Of the survey responses that reported an IQ score and an age of 29+, 13 were from people with PhDs (mean IQ 146.5, SD 14.8) and 135 were from people without (mean IQ 139.3, SD 14.3). Doing a t-test I get t = 1.68 with 14.2 degrees of freedom, giving p = 0.115.
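For anyone wanting to check the arithmetic, here is a stdlib-only sketch of Welch's t-test computed from those summary statistics (scipy's `ttest_ind_from_stats` with `equal_var=False` would give the same result); the two-tailed p of roughly 0.115 then comes from a t-distribution table at this t and df:

```python
import math

# Summary statistics from the comment: PhD holders vs. non-PhD holders,
# both restricted to survey respondents aged 29+.
m1, s1, n1 = 146.5, 14.8, 13
m2, s2, n2 = 139.3, 14.3, 135

se2 = s1**2 / n1 + s2**2 / n2          # squared standard error of the difference
t = (m1 - m2) / math.sqrt(se2)

# Welch-Satterthwaite degrees of freedom
df = se2**2 / ((s1**2 / n1)**2 / (n1 - 1) + (s2**2 / n2)**2 / (n2 - 1))

print(round(t, 2), round(df, 1))  # t = 1.68, df = 14.2
```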
It’s a third of a SD and change (assuming a 15-point SD, which is the modern standard), which isn’t too shabby; comparable, for example, with the IQ difference between managerial and professional workers. Much smaller than the difference between the general population and PhDs within it, though; that’s around 25 points.
I was really asking about sample size, as I was too lazy to grab the raw data.
Yes, and even without particular expectation of inflation, once you see IQs that are very high, you can be quite sure IQs tend to be inflated simply because of the prior being the bell curve.
Any time I see “undiscriminating scepticism” mentioned, it's a plea to simply ignore necessarily low priors when the evidence is too weak to change conclusions. Of course, it's not true “undiscriminating scepticism”. If LW had undergone psychologist-administered IQ testing and those were the results, and there were then a lot of scepticism, you could claim that some of the scepticism was excessive. But as it is, rational processing of probabilities is not going to discriminate that much based on self-reported data.
Sceptics in that case, I suppose, being anyone who actually does the most basic “Bayesian” reasoning, such as starting with a Gaussian prior when you should (and understanding how an imperfect correlation between self reported IQ and actual IQ would work on that prior, i.e. regression towards the mean when you are measuring by proxy). I picture there’s a certain level of Dunning Kruger effect at play, whereby those least capable of probabilistic reasoning would think themselves most capable (further evidenced by calibration; even though the question may have been to blame, I’m pretty sure most people believed that a bad question couldn’t have that much of an impact).
Wikipedia to the rescue, given that a lot of stuff is behind the paywall...
http://en.wikipedia.org/wiki/Illusory_superiority#IQ
“The disparity between actual IQ and perceived IQ has also been noted between genders by British psychologist Adrian Furnham, in whose work there was a suggestion that, on average, men are more likely to overestimate their intelligence by 5 points, while women are more likely to underestimate their IQ by a similar margin.”
and more amusingly
http://en.wikipedia.org/wiki/Human_penis_size#Erect_length
Just about any internet forum would select for people owning a computer and having an internet connection and thus cut off the poor, mentally disabled, and so on, improving the average. So when you state it this way—mere “above average”—it is a set of completely unremarkable beliefs.
It’d be interesting to check how common are advanced degrees among white Americans with actual IQ of 138 and above, but I can’t find any info.
This was one of the things I checked when I looked into the IQ results from the survey here and here. One of the things I thought was particularly interesting was that there was a positive correlation between self-reported IQ and iqtest.dk (which is still self-reported, and could have been lied on, but hopefully this is only deliberate lies, rather than fuzzy memory effects) among posters and a negative correlation among lurkers. This comment might also be interesting.
I endorse Epiphany’s three potential explanations, and would quantify the last one: I strongly suspect the average IQ of LWers is at least one standard deviation above the norm. I would be skeptical of the claim that it’s two standard deviations above the norm, given the data we have.
Wow, that’s quite interesting—that’s some serious Dunning-Kruger. Scatterplot could be of interest.
Thing to keep in mind is that even given a prior that errors can go either way equally, when you have obtained a result far from the mean, you must expect that errors (including systematic errors) were predominantly in that direction.
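This is regression toward the mean, and a toy simulation shows it directly. The 10-point standard deviation of the reporting noise is an invented assumption, not a measured property of self-reports:

```python
import random
import statistics

random.seed(1)

# Toy model: true IQ ~ N(100, 15); the self-report adds zero-mean noise.
true_iqs = [random.gauss(100, 15) for _ in range(200_000)]
reports = [iq + random.gauss(0, 10) for iq in true_iqs]

# Select the group whose *reported* value is far above the mean.
high = [(t, r) for t, r in zip(true_iqs, reports) if r >= 138]
mean_true = statistics.mean(t for t, _ in high)
mean_reported = statistics.mean(r for _, r in high)

# Even though the noise is unbiased, the selected group's true mean is
# well below its reported mean: its errors went predominantly upward.
print(round(mean_reported, 1), round(mean_true, 1))
```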
The other issue is that in 1000 people, about 1 will have an IQ of 146 or higher, while somewhere around 10 will have fairly severe narcissism (and this is not just your garden variety of overestimating oneself, but the level where it interferes with normal functioning).
Self reported IQ of 146 is thus not really a good sign overall. Interestingly some people do not understand that and go on how the others “punish” them for making poorly supported statements of exceptionality, while it is merely a matter of correct probabilistic reasoning.
The actual data is even worse than comparisons of prevalence would suggest: 25% of people put themselves in the top 1% in some circumstances.
Yes, average of 115 would be possible.
The actual data is linked in the post near the end. If you drop three of the lurkers (who self-reported 180, 162, and 156 but scored 102, 108, and 107), then the correlation is positive (but small). (Both samples look like trapezoids, which is kind of interesting, but might be explained by people using different standard deviations.)
That sounds pretty high to me. I haven’t looked into narcissism as such, but I remember seeing similar numbers for antisocial personality disorder when I was looking into that, which surprised me; the confusion went away, however, when I noticed that I was looking at the prevalence in therapy rather than the general population.
Something similar, perhaps?
You know, people do lie to themselves. It’s a sad but true (and well known around here) fact about human psychology that humans have surprisingly bad models of themselves. It is simply true that if you asked a bunch of people selected at random about their (self-reported) IQ scores, you would get an average of more than 100. One would hope that LessWrongers are good enough at detecting bias in order to mostly dodge that bullet, but the evidence of whether or not we actually are that good at it is scarce at best.
Your unintentional-lie explanation does not explain how the SAT scores ended up so closely synchronised to the IQ scores; as we know, one common sign of a lie is that the details do not add up. Synchronising one's SAT scores to the same level as one's IQ scores would most likely require conscious effort, making the discrepancy obvious to the LessWrong members who took the survey. If you would argue that they chose corresponding SAT scores in some way that did not require them to become consciously aware of the discrepancies, how would you support the claim that they synched them by accident? If not, would you argue that LessWrong members consciously lied about it?
Linda Silverman, a giftedness researcher, has observed that parents are actually pretty decent at assessing their child’s intellectual abilities despite the obvious cause for bias.
“In this study, 84% of the children whose parents indicated that they fit three-fourths of the characteristics tested above 120 IQ. ” (An unpublished study, unfortunately.)
http://www.gifteddevelopment.com/PDF_files/scalersrch.pdf
This isn't exactly the same as managing knowledge of one's own intellectual abilities. Still, if you would expect parents to be hideously biased when assessing their children's intellectual abilities, and a giftedness researcher reports that they mostly aren't, then shouldn't you also consider that your concern that most LessWrong members subconsciously inflate their own IQ scores by a whopping 30 points (if that is your perception) may be far less of a problem than you thought?
Scores on standardized tests like SAT and ACT can be improved via hard work and lots of practice—there are abundant practice books out there for such tests. It is entirely conceivable that those self-reported IQs were generated via comparing scores on these standardized tests against IQ-conversion charts. I.e., with very hard work, the apparent IQs are in the 130+ range according to these standardised tests; but when it comes to tests that measure your native intelligence (e.g., iqtest.dk), the scores are significantly lower. In future years, it would be advisable for the questionnaire to ask participants how much time they spent in total to prepare for tests such as SAT and ACT—and even then you might not get honest answers. That brings me to the point of lying...
Not necessarily true. If the survey results show that LWers generally have IQs in the gifted range, then it allows LWers to signal their intelligence to others just by identifying themselves as LWers. People would assume that you probably have an IQ in the gifted range if you tell them that you read LW. In this case, everyone has an incentive to fudge the numbers.
erratio has also pointed out that participants might have answered those personality tests untruthfully in order to signal intelligence, so I shan’t belabour the point here.
Ok, now here is a motive! I still find it difficult to believe that:
1. Most of 1000 people care so much about status that they're willing to prioritize it over truth, especially since this is LessWrong, where we gather around the theme of rationality. If there's any place you'd expect it to be unlikely to find a lot of people lying on a survey, it's here.
2. The people who take the survey know that their IQ contribution is going to be watered down by the 1000 other people taking the survey. Unless they have collaborated by PM and made a pact to fudge their IQ figures, these frequently math-oriented people must know that fudging their own IQ figure will have very, very little impact on the average that Yvain calculates. I do not know why they'd see the extra work as worthwhile given the expected impact; thinking that fudging only one of the IQs is worthwhile is essentially falling for a Pascal's mugging.
3. Registration at LessWrong is free and it's not exclusive. At all. How likely is it, do you think, that this group of rationality-loving people has reasoned that claiming to have joined a group that anybody can join is a good way to brag about their awesomeness?
I suppose you can argue that people who have karma on their accounts can point to that and say “I got karma in a gifted group” but lurkers don’t have that incentive. All lurkers can say is “I read LessWrong.” but that is harder to prove and even less meaningful than “I joined LessWrong”.
Putting the numbers where our mouths are:
If the average IQ for lurkers / people with low karma on LessWrong is pretty close to the average IQ for posters and/or people with karma on LessWrong, would you say that the likelihood of post-making/karma-bearing LessWrongers lying on the survey in order to increase other’s status perceptions of them is pretty low?
Do you want to get these numbers? I’ll probably get them later if you don’t, but I have a pile of LW messages and a bunch of projects going on right now so there will be a delay and a chance that I completely forget.
From the public dataset:
165 out of 549 responses without reported positive karma (30%) self-reported an IQ score; the average response was 138.44.
181 out of 518 responses with reported positive karma (34%) self-reported an IQ score; the average response was 138.25.
One of the curious features of the self-reports is how many of the IQs are divisible by 5. Among lurkers, we had 2 151s, 1 149, and 10 150s.
I think the average self-response is basically worthless, since it’s only a third of responders and they’re likely to be wildly optimistic.
So, what about the Raven's test? In total, 188 responders with positive karma (36%) and 164 responders without positive karma (30%) took the Raven's test, with averages of 126.9 and 124.4. Noteworthy are the new max and min: the highest scorer on the Raven's test claimed to get 150, and the three sub-100 scores were 3, 18, and 66 (of which I suspect only the last isn't a typo or error of some sort).
Only 121 users both self-reported IQ and took the Raven’s test. The correlation between their mean-adjusted self-reported IQ and mean-adjusted Raven’s test was an abysmal .2. Among posters with positive karma, the correlation was .45; among posters without positive karma, the correlation was -.11.
Thank you for these numbers, Vaniver! I should have thanked you sooner. I had become quite busy (partly with preparing my new endless September post) so I did not show up to thank you promptly. Sorry about that.
You’re welcome!
I have thought of that. But a person who wants to lie about his IQ would think this way: If I lie and other LWers do not, it is true that my impact on the average calculated IQ will be negligible, but at least it will not be negative; but if I lie and most other LWers also lie, then the collective upward bias will lead to a very positive result which would portray me in a good light when I associate myself with other LWers. So there is really no incentive to not lie.
(I’m not saying that they definitely lied; I’m merely pointing out that this is something to think about.)
Fair point; but very often the kind of clubs you join does indicate something about your personality and interests, regardless of whether you are actually an active/contributing member or not. Saying “I read LessWrong” or “I joined LessWrong” certainly signals to me that you are more intelligent than someone who joined, say, Justin Bieber’s fan club, or the Twilight fan-fiction club. And if there are numbers showing that LW readers tend to have IQs in the gifted range, naturally I would think that X is probably quite intelligent just by virtue of the fact that X reads LW.
One last point is that LWers might not be deliberately lying: Perhaps they were merely victim to the Dunning-Kruger effect when self-reporting IQs. I am not sure if there are any studies showing that intelligent people are generally less likely to fall prey to the Dunning-Kruger effect.
Last but not least, I would again like to suggest that future surveys include questions asking people how much time they spent on average preparing for exams such as the SAT and the ACT—as I pointed out previously, scores on such exams can be very significantly improved just by studying hard, whereas tests like iqtest.dk actually measure your native intelligence.
Not true. It would probably take at least 20 minutes to fudge everything that would have to be fudged, and when you're already fatigued from filling out survey questions, that's even less appealing. At best, this would be falling for a Pascal's mugging. True, some people may have done it anyway. But would the majority of survey participants, at a site about rationality?
They were not asked to assess their own IQ; they were asked to report the results of a real assessment. To report something other than the results of a real assessment is, in this case, a type of lie.
That’s a suggestion for Yvain. I don’t assist with the surveys.
Make a copy and post it. Most browsers have the ability to print/save pages as PDFs or various forms of HTML.
Ok I managed to dig it up!
From the December 1993 Mensa Bulletin.
* The LessWrongers were added by me, using the same calculation method as in the comment where I test my personality type predictions and are based on the 2012 survey results.
Thanks for the analysis. I agree with your conclusion.
On a less relevant note, it does feel good to see more evidence that the community we hang out with is smart and awesome.
This also explains a lot of things. People regard IQ as if it is meaningless, just a number, and they often get defensive when intellectual differences are acknowledged. I have spent a lot of time researching adult giftedness (though I'm most interested in highly gifted+ adults), and assuming the studies were done in a useful way (I've heard there are problems with this), and that my personal experiences talking to gifted adults are halfway decent as representations of the gifted adult population, gifted adults differ from others in a plethora of ways.

For instance, in "You're Calling Who A Cult Leader?" Eliezer is annoyed that people assume high praise is automatic evidence that a person has joined a cult. What he doesn't touch on is that there are very significant neurological differences between people in just about every way you could think of, including emotional excitability. People assume that others are like themselves, and this causes all manner of confusion. Eliezer is clearly gifted and intense, and he probably experiences admiration with a higher level of emotional intensity than most. If the readers of LessWrong and Hacker News are gifted, the same goes for many of them. To those who feel so strongly, excited praise may seem fairly normal. To all those who do not, it probably looks crazy. I explained more about excitability in the comments.
I also want to say (without getting into the insane amount of detail it would take to justify this to the LW crowd—maybe I will do that later, but one bit at a time) that in my opinion, as a person who has done lots of reading about giftedness and has a lot of experience interacting with gifted people and detecting giftedness, the idea that most survey respondents are giving real answers on the IQ portion of the survey seems very likely to me. I feel 99% sure that LessWrong’s average IQ really is in the gifted range, and I’d even say I’m 90%+ sure that the ballpark hit on by the surveys is right. (In other words, they don’t seem like a group of predominantly exceptionally or profoundly gifted Einsteins or Stephen Hawkings, or just talented people at the upper ends of the normal range with IQs near 115, but that an average IQ in the 130′s / 140′s range does seem appropriate.)
This says nothing about the future, though. The average IQ has decreased on each survey, by about two points per year. If the trend continues, then in as many years as LessWrong has existed, it may regress so far toward the mean that it will no longer count as gifted by every IQ standard (it would still qualify under some definitions and standards, but not others). I will be writing a post about the future of LessWrong very soon.
Would you predict then that people who’re not gifted are in general markedly less inclined to praise things with a high level of intensity?
This seems to me to be falsified by everyday experience. See fan reactions to Twilight, for a ready-to-hand example.
My hypothesis would simply be that different people experience emotional intensity as a reaction to different things. Thus, some think we are crazy and cultish, while also totally weird for getting excited about boring and dry things like math and rationality… while some of us think that certain people who are really interested in the lives of celebrities are crazy and shallow, while also totally weird for getting excited about boring and bad things like Twilight.
This also leads each group to think that the other doesn’t get similar levels of emotional intensity, because only the group’s own type of “emotional intensity” is classified as valid intensity and the other group’s intensity is classified as madness, if it’s recognized at all. I’ve certainly made the mistake of assuming that other people must live boring and uninteresting lives, simply because I didn’t realize that they genuinely felt very strongly about the things that I considered boring. (Obligatory link.)
(Of course, I’m not denying there being variation in the “emotional intensity” trait in general, but I haven’t seen anything to suggest that the median of this trait would be considerably different in gifted and non-gifted populations.)
Ok, where do I find them?
If you have to go looking, you’re lucky.
If you want to find them in person, the latest Twilight movie is still in theaters, although you’ve missed the people who made a point of seeing it on the day of the premier.
Haha, I guess so. I am very, very nerdy. I had fun getting worldly in my teens and early 20s, but I've learned that most people alienate me, so I've isolated myself into as much of an "ivory tower" as possible. (Which consists of me doing things like getting on my computer on Saturday evenings and nerding so hard that I forget to eat.)
Not really.
What did they do when you saw them?
How do we distinguish the difference between the kind of fanaticism that mentally unbalanced people display for, say, a show that is considered by many to have unhealthy themes and the kind of excitement that normal people display for the things they love? Maybe Twilight isn’t the best example here.
I didn’t. I don’t particularly have to go out of my way to find Twilight fans, but if I did, I wouldn’t.
I think you’re dramatically overestimating the degree to which fans of Twilight are psychologically abnormal. Harlequin romance was already an incredibly popular genre known for having unhealthy themes. Twilight, like Eragon, is a mostly typical work of its genre with a few distinguishing factors which sufficed to garner it extra attention, which expanded to the point of explosive popularity as it started drawing in people who weren’t already regular consumers of the genre.
I wouldn’t be surprised if this is true.
This still does not answer the question “What sample can we use that filters out fanaticism from mentally unbalanced people to compare the type of excitement that gifted people feel to the type of excitement that everyone else feels?” Not to assume that no gifted people are mentally unbalanced… I suppose we’d really have to filter those out of both groups.
Taboo “mentally unbalanced”.
What distinction are you trying to make here?
we will all be brain-dead in 70 years.
It’s true that the downward trend can’t go on forever, and to say that it’s definitely going to continue would be (all by itself, without some other arguments) an appeal to history or slippery slope fallacy. However, when we see a trend as consistent and as potentially meaningful as the one below, it makes sense to start wondering why it is happening:
IQ Trend Analysis
I was mostly just trying to point out that you are extrapolating from a sample size of three points. Three points which have a tremendous amount of common causes that could explain the variation. Furthermore you aren’t extrapolating 10% further from the span of your data, which might be ok, but actually 100% further. You’re extrapolating for as long as we have data, which is… absurd.
One, I am used to seeing the term “sample size” applied to things like the number of people being studied, not the number of data points used in a calculation. If there is some valid use of the term “sample size” that I am not aware of, would you mind pointing me in the correct direction?
Two, I am not sure where you’re getting “three points” from. If you mean the amount of IQ points that LessWrong has lost on the studies, then it was 7.18 points, not three.
Two points per year, which could be explained in other ways, sure. No matter what the trend, it could be explained in other ways. Even if it was ten points per year we could still say something like “The smartest people got bored taking the same survey over and over and stopped.” There are always multiple ways to explain data. That possibility of other explanations does not rule out the potential that LessWrong is losing intelligent people.
Not sure what these 10% and 100% figures correspond to. If I am to understand why you said that, you will have to be specific about what you mean.
Including all of the data rather than just a piece of the data is bad why?
Three points referred to the number of surveys taken, which I didn’t bother to look up, but I believe is three.
10% and 100% referred to the time span these data points cover, i.e., three years. Basically, I might be OK with you making a prediction for the next three months (still probably not), but extrapolating for three years based on three years of data seems a bit much to me.
Oh, I see. The problem here is that “if the trend continues” is not a prediction. “I predict the trend will continue” would be a prediction. Please read more carefully next time. You confused me quite a bit.
If you’re not making a prediction, then it’s about as helpful as saying “If the moon crashes into North America next year, LW communities will largely cease to exist.”
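To make the disagreement concrete: fitting a line to three yearly means and extending it a full data-span ahead is mechanically trivial, which is part of the problem. Here is a minimal sketch, using hypothetical means chosen to match the ~7-point total drop discussed above (not the actual survey data):

```python
# Hypothetical mean self-reported IQs for the three surveys,
# chosen to match the ~7-point total drop mentioned above.
years = [2009, 2011, 2012]
means = [145.9, 140.0, 138.7]

# Ordinary least-squares fit by hand (no libraries needed)
n = len(years)
xbar = sum(years) / n
ybar = sum(means) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(years, means))
         / sum((x - xbar) ** 2 for x in years))
intercept = ybar - slope * xbar

# "If the trend continues": extrapolating three more years, i.e.
# 100% of the data's span past the last observation
forecast_2015 = intercept + slope * 2015
print(f"slope: {slope:.2f} IQ points/year, 2015 forecast: {forecast_2015:.1f}")
```

With only three points there is a single residual degree of freedom, so the fitted slope carries enormous uncertainty; the sketch is meant to show how little machinery the extrapolation rests on, not that the forecast is credible.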
Looks like Aumann at work. My own readings, though more specifically on teenage giftedness in the 145+ range, along with stuff on ASD and Asperger’s, heavily corroborate this.
When I was 17, my (direct) family and I had strong suspicions that I was in this range of giftedness—suspicions which were never reliably tested, and thus neither confirmed nor disconfirmed. It’s still up in the air and I still don’t know whether I fit into some category of gifted or special individuals, but at some point I realized that it wasn’t all that important and that I just didn’t care.
I might have to explore the question a bit more in depth if I decide to return to the official educational system at some point (I mean, having a paper certifying that you’re a genius would presumably kind of help when making a pitch at university to let you in without the prerequisite college credit because you already know the material). Just mentioning all of the above to explain a bit where my data comes from: my parents and I spent around three months going through tons of books, references, papers and other information, along with several interviews with various psychology professionals.
Also, and this may be another relevant point, the only recognized, official IQ test I ever took was during that time, and I had a score of “above 130”² (verbal statement) and reportedly placed in the 98th and 99th percentiles on the two sections of a modified WAIS test. The actual normalized score was not included in the report (that psychologist(?¹) sucked, and also probably couldn’t do the statistics involved correctly in the first place).
However, I was warned that the test lost statistical significance / representativeness / whatever above 125, so even if I had an IQ of 170+ that test wouldn’t have been able to tell; it had been calibrated for mentally deficient teenagers and very low IQ scores (and was only a one-hour test, and only ten of the questions were written, the rest dynamic or verbal with the psychologist). Later looking-up-stats-online also revealed that the test result distributions were slightly skewed, and that a resulting converted “IQ” of “130” on this particular test was probably more rare in the general population than an IQ of 130 normally represents, because of some statistical effects I didn’t understand at the time and thus don’t remember at all.
Where I’m going with this is that this doesn’t seem like an isolated effect at all. In fact, it seems like most of North America in general pays way more attention to mentally deficient people and low IQs than to high IQs and gifted individuals. Based on this, I have a pretty high current prior that many on LW will have received scores suffering from similar effects if they didn’t specifically seek out the sorts of tests recommended by Mensa or the like, and perhaps even then.
Based on this, I would expect such effects to compensate or even overcompensate for any upward nudging in the self-reporting.
=====
I don’t know if it was actually a consulting psychologist. I don’t remember the title she had (and it was all done in French). She was “officially” recognized as having the legal capacity to administer IQ tests in Canada, though, so whatever title is normally in charge of that is probably the right one.
Based on this, the other hints I mention in the text, and internet-based IQ tests consistently giving me 150-ish numbers when at peak performance and 135-ish when tired (I took those a bit later on, perhaps six months after the official one), 135 is the IQ I generally report (including in the LW survey) when answering forms that ask for it and seems like a fairly accurate guess in terms of how I usually interact with people of various IQ levels.
Was Mensa’s test conducted on the internet? The internet has a systematic bias in personalities. For example, subscriber counts for the personality-type subreddits favor Introversion and iNtuition:
INTJ: 4,828
INTP: 4,457
INFP: 1,817
INFJ: 1,531
IAWYC, but “the internet” is way too broad for what you actually mean—ISTM that a supermajority of teenagers and young adults in developed countries uses it daily, though plenty of them mostly use it for Facebook, YouTube and similar and probably have never heard of Reddit. (Even I never use Reddit unless I’m following a link to a particular thread from somewhere else—but the first letter of my MBTI is E so this kind of confirms your point.)
Yeah...by “internet” what I meant was sites that most people do not know about—sites that you would only stumble upon in the course of extensive net usage. I once described it to a friend as “deep” vs “shallow” internet, with depth corresponding to the extent to which a typical visitor to the website uses the internet. Even within a website (say reddit) a smaller sub-reddit would be “deeper” than a main one.
I myself am actually a counterexample to my own “extroverts don’t use the internet as much” notion...but I’m only a moderate extrovert. (ENTP or ENFP depending on the test...ENTP description fits better. I listed ENTP in the survey.)
By that definition, there are many nearly disconnected “deep internets”.
Yes... I’m confused. Is this supposed to be a flaw in the definition? The idea here is to use relative obscurity to describe the degree to which a site is visited only by internet users who do heavy exploring. There are only a few “shallow” regions: Facebook, Wikipedia, Twitter... the shallowest being Google. These are all high traffic, and even people who never use computers have heard some of these words. There are many deep regions, on the other hand, and most are disconnected.
It is if you then proceed to claim to have statistics over users of the “deep internet”.
Yeah, different websites have different personality skews, which complicates things. Fortunately there’s evidence against Mensa having used an online sample: Epiphany said the results were published in December 1993. It’s fairly easy to give a survey to an Internet forum nowadays, but where would Mensa have found an online sample back in ’93? IRC? Usenet? (There is a rec.org.mensa where people posted about personality and the Myers-Briggs back in 1993, but the only relevant post that year was someone asking about Mensans’ personalities to no avail.)
I don’t have any more data than that, sorry.
To suggest that people on the internet may have certain personality types is a good suggestion, but it raises two questions:
1. Might your example of Reddit be similar to LW because LW gets lots of users from Reddit? (Or put another way, if the average LessWronger is gifted, maybe “the apple doesn’t fall far from the tree” and Reddit has lots of gifted people, too.)
2. Might gifted people gather in large numbers on the internet because it’s easier to find people with similar interests? (Just because people on the internet tend to have those personality types, it doesn’t mean they’re not gifted.)
As for “the internet” having a systematic bias in personalities, I would like to see the evidence of this that’s not based on a biased sample. It’s likely that the places you go to find people like you will, well, have people like you, so even if you (or somebody else on one of those sites) observed a pattern in personality types across sites they hang out on, the sample is likely to be biased.
I’d say “LW has about as many gifted people as Reddit (proportionally)” should be a sort of null hypothesis: if this is true, then people on LessWrong are not actually surprisingly smart.
I wouldn’t say that’s a reasonable null. Reddit has like 8 million users; 2% of the 310m American population is just 6.2m, so it would be difficult for Reddit to be 100% gifted while LW could easily be. The size disparity is so large that such a null seems more than a little weird.
I don’t think I understand your objection. If LW were 100% gifted (while Reddit, presumably, is not?) wouldn’t that be evidence that there’s some sort of IQ selection at work? (or, conceivably, that just being on LW makes people smarter, although I think that’s not supposed to be a thing).
I’m saying that we could, just from knowing how big Reddit is, reject out of hand all sorts of proportions of gifted because it would be nigh impossible; a set of nulls (the proportions 0-100%), many of which (all >75%) we can reject before collecting any data is a pretty strange choice to make!
Well, really what I want to ask is: is LW any different, IQ-wise, from a random selection of Redditors of the same size? Possibly stating it in terms of a proportion of “gifted” people is misleading, but that’s not as interesting anyway.
I don’t see the difference. A random selection of Redditors is going to depend on what Reddit overall looks like...
Well, I don’t see the difference either, but I’m still not entirely sure what about this hypothesis seems unreasonable to you, so I was hoping this reformulation would help.
The reasoning behind it is as follows: I figure a generic discussion board on the Internet has roughly the same IQ distribution as Reddit. If LW has a high average IQ, but so does Reddit, then presumably these are both due to the selection effect of “someone who posts on an online discussion board”. So to see if LW is genuinely smarter, we should be comparing it to Reddit, not to the Normal(100,15) distribution.
I would be shocked if that were true. Even after having grown stupendously, Reddit is still better than most discussion boards I happen to read.
Okay, fair enough. I don’t actually have much experience with Reddit.
I still think it’s a reasonable reference class. For one thing, LW runs on Reddit-based code. In particular, I would say that being significantly smarter than Reddit is a good cutoff for the feeling of smugness to start kicking in.
Maybe it just means Reddit-folk are surprisingly smart? I mean, IQ 130 corresponds to 98th percentile. The usual standard for surprise is 95th percentile.
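For reference, both percentile figures follow directly from the definition of the IQ scale (normal distribution, mean 100, SD 15); a quick check with the Python standard library:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # the conventional IQ scale

# IQ 130 sits two standard deviations above the mean
percentile_130 = iq.cdf(130) * 100  # ~97.7th percentile
# The "95th percentile" surprise threshold, expressed in IQ terms
iq_at_95th = iq.inv_cdf(0.95)       # ~124.7
print(f"{percentile_130:.1f}th percentile; 95th percentile is about IQ {iq_at_95th:.1f}")
```

So “surprisingly smart” by the 95th-percentile standard would only require a mean IQ of about 125, not 130.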
That’s a good point—I hadn’t considered sample bias. Extending that point, though, LessWrong and Mensa are a biased sample in more than the simple fact that the people are gifted. It is only a subset of gifted people that choose to participate in Mensa. It should be mentioned that I’m using “internet” as shorthand for the “deep” internet… not Facebook. I’m talking about websites that most people do not use, that you’d have to spend a lot of time on the internet to find. As such, the “internet” hypothesis would predict a greater bias towards smaller subreddits.
Anyway, I was mostly posing an alternate hypothesis. When I first noticed the trend on the personality forums, this is what I thought was happening -
Slacking off / internet addiction selects for Perceiving and low Conscientiousness.
Non-social-networking internet use selects for Introversion.
Any forum discussing an idea without immediate practical benefits selects for iNtuition.
And then, factor in lesswrong/giftedness...
If it’s a math/science/logic topic, it selects for Thinking and iNtuition.
High scores on Raven’s matrices select for Thinking and iNtuition. High scores on working memory components select for Judging. The ACT/SAT additionally select for Conscientiousness.
Strong mathematical affinity shifts those on the border of NTP and NTJ into *NTJ (people prefer dealing with intellectually ordered systems, even if they have messy rooms and chaotic lifestyles)
A scientific/engineering ideology, with its shift towards the concrete (empirical evidence, practical gains in technology, etc.), shifts those on the border of NTJ and STJ into ISTJ.
In summary, I think LW and Mensa surveys are attracting a special subset of people: idea-driven and logical (iNtuitives and Thinkers), and likely to use the internet often and spot the survey (Introverts).
That’s much nicer and much more detailed. Questions this raises:
1. Might the “deep” internet you refer to be selecting for gifted people? (I think this is likely!)
2. Do we have figures on personality types and IQs for internet forums in general, not from a biased sample set? These figures would test your theory.
I agree with (1), but would claim that it also selectively attracts introverts (and I’m unsure whether or not it will bias J-P to the P side)
(2) For each of these, I tried not to look at the data after finding the poll. I made predictions first. Just for fun / to correct for hindsight bias, anyone reading might want to do the same. To play, don’t click on the link or read my prediction until you make yours. Also, here is some data which claims to represent the general population—http://mbtitruths.blogspot.com/2011/02/real-statistics.html for comparison. I’ve already seen similar data on another site, so I won’t state my predictions on this one.
A website posts stats for people who have taken the test. Unlike the above simple random sample, this selects for internet users.
http://www.personalitypage.com/html/demographics.html
Prediction: I’d consider this “shallow internet”, so very weak biases to (I). The general population is (S), I’d expect a weak bias to (N) but not enough to overcome the general population’s S centering.
Result: apparently I suck at predictions. In hindsight, all of the top three would be predicted to score high on “Fi” in a Jungian cognitive function test, and Fi in theory would be more interested in taking personality tests. But that’s hindsight, and I’m not sure the connection between MBTI and Jung has been verified empirically.
Here is a “deep internet” forum that I wouldn’t ever visit… Christian singles chat forum! This should not suffer from the sample bias you mentioned earlier (He stated that websites I visit are likely to have users with similar personalities to me [ENTP])
http://christianchat.com/christian-singles-forum/34516-meyers-briggs-type-indicator-mbti-poll.html
Prediction: I tried my best not to look at the data despite the high visual salience as soon as you open that link. Here is my prediction: I’d predict strong biases towards Introversion (because internet), slight biases towards iNtuition (because religion is idea-based), moderate bias to Feeling (I think religious people are illogical) and … let’s say a slight tilt towards Judging. Call it a hunch, life experience says that Si (judging + sensing) is particularly predisposed to religion.
Result: OK, looks like my trends were right but my magnitude was way off. My “hunch” was correct but I didn’t listen to it closely enough and vastly underestimated the Judging bias, while my personal prejudice overestimated the Feeling bias. My predictions about intuition and introversion were essentially correct, though.
http://personalitycafe.com/myers-briggs-forum/28171-mbti-demographics.html
Click the ppt, it has data by education.
Prediction: NT’s pursue higher education, SF’s do not. Other two dichotomies don’t matter as much, but J helps slightly.
Result: seems about right. Eyeballing, J seems not to matter much until college, at which point it prevents dropping out.
For IQ—http://asm.sagepub.com/content/3/3/225.short
Prediction- Strong N, slight T bias. I don’t think T actually means “intelligent” as I define it, but I do think it would help on some portions of the IQ test.
Result: N bias only. Interesting.
Finally, Scientific aptitude: http://www.amsciepub.com/doi/pdf/10.2466/pr0.1970.26.3.711
Prediction—strong N, moderate T. I’m not sure about J-P. I think people who choose science tracks and go into academia will be P (creative types), whereas kids who get good grades but ultimately do not choose science will be J. I’m not sure which group they are looking at (I didn’t permit myself to read it yet, so I’m a bit vague on what exactly they did). I don’t think E—I will matter at all.
Result -
NT types take high-level science a lot more; Introverts take them slightly more. J-P is irrelevant. Intuition really helps in school at all levels. Feeling relates to high GPA in the easy courses but not the hard course (that’s pretty unexpected). Introversion relates to high GPA in the hard course but not in the easy courses. Perceivers start out with a pretty big edge both in IQ and GPA in the lower-level courses; Judging takes a slight lead in both those metrics in the advanced course. Not sure if this is noise.
Side finding—they also did IQ measurements. Again, only N related to IQ (in fact, F won out over T)...but it did not relate as much in the advanced courses. I think the advanced course chopped off the lower end of the IQ bell curve, leaving only smart Sensors. By the way, Extroverts have an IQ edge, despite getting lower grades and not taking advanced courses as often.
Thoughts? I think in general my ideas about introversion not mattering for intelligence, but mattering a lot for internet use, bear out. Apparently Thinking doesn’t really matter either...which I sort of felt was true, but I didn’t actually expect the IQ test scores to agree with me on that. It might have to do with self reported vs actual use of logic.
Of course, we are looking at the center of the bell curve, whereas on LW we are (presumably) looking at the far right edge.
EDIT: here is another IQ one with bigger sample size. http://www.psytech.com/Research/Intelligence-2009-08-11.pdf
They say that they found IQ correlates with I, N, T, and P. However, they claim they were surprised about the “I” correlation, because a large number of other studies have found that E is positively correlated. They go on to talk about how different testing conditions might favor E vs. I. Some interesting further reading in there...it seems that N only correlates on the verbal reasoning section.
I’m inclined to believe the survey results myself, but there is a third possibility. If a certain personality type (or distribution of types) reflects a desire to associate with gifted people, or to be seen as gifted, we’d likely expect that to be heavily overrepresented in MENSA; that’s pretty much the reason the club exists, after all. We might also expect people with those desires to be less inclined to share average or poor IQ results, or even to falsify results.
If the same personality type is overrepresented here, then we have a plausible cause for similar personality test results and for exaggerated IQ reporting, without necessarily implying that the actual IQ distributions are similar.
Looking at Groups of IQs:
I acknowledge that the sample sets for the highest IQ groups are, of course, rather small, but that’s all we’ve got. What’s been happening with the numbers for the highest IQ groups, if indicative of what’s really happening, is not encouraging. The highest two groups have decreased in numbers while the lowest two have increased. Also, the prominence of each group has shifted over time such that the highest group went from being 1/5 to 1/20 of respondents, while the moderately gifted and normal groups have grown substantially.
Exceptionally Gifted Respondents (Self-Reported IQ)
(Defined as having an IQ of 160 or more)
2009: 11 (7%)
2011: 27 (3%)
2012: 22 (2%) (Decreased)
Highly Gifted Respondents (Self-Reported IQ)
(Defined as having an IQ between 145-159)
2009: 17 (11%)
2011: 88 (9%)
2012: 81 (7%) (Decreased)
Moderately Gifted Respondents (Self-Reported IQ)
(Defined as having an IQ between 132-144)
2009: 22 (14%)
2011: 125 (13%)
2012: 149 (11%) (Increased)
Normal Respondents (Self-Reported IQ)
(Defined as having an IQ between 100-131)
2009: 11 (7%)
2011: 91 (10%)
2012: 94 (9%) (Increased)
Each Group as a Percentage of Total IQ Respondents, by Year:
2009 Group IQ Distribution (As a percentage of 61 total IQ respondents)
18% Exceptionally Gifted
28% Highly Gifted
36% Moderately Gifted
18% Normal IQ
2011 Group IQ Distribution (As a percentage of 331 total IQ respondents)
8% Exceptionally Gifted
27% Highly Gifted
38% Moderately Gifted
28% Normal IQ
2012 Group IQ Distribution (As a percentage of 346 total IQ respondents)
6% Exceptionally Gifted
23% Highly Gifted
43% Moderately Gifted
27% Normal IQ
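The year-by-year shares above can be recomputed directly from the listed counts (a sketch; small ±1% differences from the quoted figures can arise from rounding):

```python
# Self-reported IQ counts per group, copied from the figures above
counts = {
    2009: {"Exceptional": 11, "Highly": 17, "Moderately": 22, "Normal": 11},
    2011: {"Exceptional": 27, "Highly": 88, "Moderately": 125, "Normal": 91},
    2012: {"Exceptional": 22, "Highly": 81, "Moderately": 149, "Normal": 94},
}

for year, groups in sorted(counts.items()):
    total = sum(groups.values())  # 61, 331, and 346 IQ respondents
    shares = {g: round(100 * n / total) for g, n in groups.items()}
    print(year, total, shares)
```

The “went from 1/5 to 1/20” claim corresponds to the Exceptionally Gifted share falling from 18% (of 61 in 2009) to 6% (of 346 in 2012).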
I don’t find it that hard to see why Lesswrong and Mensa would both select for introverted personalities. Do you?
I think most sensible people can deduce that IQ is positively correlated with SAT and ACT scores, and all of them are positively correlated with “status”. I agree that the SAT and ACT are more difficult to fudge, though. I haven’t ever taken either of them. Can they easily be retaken several times? Do (smart) people liberally talk about their scores in the US?
Many people take IQ tests of different calibers several times and could just remember or report the best result they’ve gotten. There are different levels of dishonesty. “Lying” is a bit crude.
I don’t think anyone on Less Wrong has lied about their IQ. (addendum: not enough to seriously alter the results, anyway.) If you come up with a “valuing the truth” measure, LessWrong would score pretty highly on that considering the elaborate ways people who post here go about finding true statements in the first place. To lie about your IQ would mean you’d have to know to some degree what your real IQ is, and then exaggerate from there.
However, I do think it’s more likely than you mention that most people on LessWrong self-reporting IQ simply don’t know what their IQ is in absolutely certain terms, since to know your adult IQ you’d have to see a psychometricist and receive an administered IQ test. iqtest.dk is normed by Mensa Denmark, so it’s far more reliable than self-reports. You don’t know where the self-reported IQ figures are coming from—they could be from a psychometricist measuring adult IQ, or they could be from somewhere far less reliable. It could be that they know their childhood IQ was measured at somewhere around 135, for example, and are going by memory. Or they could know by memory that their SAT is 99th percentile and have spent a minute looking up what 99th percentile is for IQ, not knowing it’s not a reliable proxy. Or they might have taken an online test somewhere that gave ~140 and are recalling that number. Who knows? Either way, I consider “don’t attribute to malice what you can attribute to cognitive imperfection” a good mantra here.
126 is actually higher than a lot of people think. As an average for a community, that’s really high—probably higher than all groups I can think of except math professors, physics professors and psychometricists themselves. It’s certainly higher than the averages for MIT and Harvard, anyway.
About the similarity between self-reported IQ and SAT scores: SAT scores post-1994 (which most of the scores on here are likely to be) are not reliable as IQ test proxies; Mensa no longer accepts them. This is because the modern test is much easier to game. I tutor the SAT, and when I took the SAT prior to applying to a tutoring company my reading score was 800, but in high school pre-college it was only in the mid-600s. SAT scores in reading are heavily influenced by (1) your implicit understanding of informal logic, and (2) your familiarity with English composition and how arguments/passages may be structured. Considering the SAT has contained these kinds of questions since the mid-90s, I am inclined to throw its value as a proxy IQ test out the window, and I don’t think you can draw conclusions about LessWrong’s real collective IQ from the reported SAT scores.
The IQTest.dk result may have given the lowest measure, but I also think it’s the most accurate measure. It would not put LessWrong in the 130s, maybe, but it would mean that the community is on the same level of intellect as, say, surgeons and Harvard professors, which is pretty formidable for a community.
Over 1000 people took the test. Statistically speaking, it should have included about 50 sociopaths. Not all sociopaths would necessarily lie on that question but considering that you’re going to have to explain why you think that none of the sociopaths lied (or pathological liars or borderlines or other types that are likely to have been included in the test results) you have chosen a position, or at least wording, which is going to be darned near impossible for you to defend.
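For what it’s worth, the “about 50 sociopaths” figure is just a base-rate multiplication, and it is sensitive to which prevalence estimate you assume (published estimates for antisocial personality disorder are commonly in the roughly 1-4% range, so the ~5% implied above is at the high end):

```python
respondents = 1000

# Expected sociopath counts under several assumed base rates;
# the comment above implicitly assumes ~5%.
for base_rate in (0.01, 0.03, 0.05):
    expected = respondents * base_rate
    print(f"base rate {base_rate:.0%}: ~{expected:.0f} expected")
```

Whether any base rate from the general population transfers to this particular sample is, of course, exactly the point disputed below.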
No, because to say “I know my IQ” when one doesn’t is also a lie, and that’s what they would be saying if they put ANY IQ in the box without knowing it.
Mensa is a club not a professional IQ testing center. They’re not even legally allowed to give out scores anymore. Their test scores are not considered to be accurate. For one thing, they (and iqtest.dk) do not evaluate for learning disorders. One in six gifted people has a learning disorder. Learning disorders lower one’s score and so the test should be adjusted to reflect this.
The iqtest.dk scores ARE self-reported. That is to say, the user types the IQ score into the survey box themselves. In that way, they’re equally flawed to the other intelligence questions, not “more reliable than self-reports”.
I stopped here because the rest of the comment follows the same pattern. About every other sentence in your comment is irrational. LessWrong is going to eat you alive, honey. Get out while you’re ahead.
Not if LessWrong values truthseeking activities more than the general population does, or considers lying/truth-fabrication a greater sin than the general population does, or if LessWrong just generally attracts fewer sociopaths than the general population. If over 1000 fitness enthusiasts take a test about weight, the statistics re: obesity are not going to reflect the general population’s. Considering the CRT scores of LessWrong and the nature of this website to admire introspection and truthseeking activities, I doubt that LW would be reflective of the general population in this way.
Lies are more than untrue statements; at least, in the context of self-reports, they are conscious manipulations of what one knows to be true. Someone might think they know their IQ because they’ve taken less reliable IQ tests, or because they had a high childhood IQ, or because they extrapolated their IQ from SAT scores, or for a host of other reasons. In this case they haven’t actually lied, they’ve just stated something inaccurate.
Someone could put an IQ when they have no idea what their IQ is, yes, in the sense that they have never taken a test of any sort and have no idea what their IQ would be if they took one, even an inaccurate one. I don’t think many people here would do that, though, because of the truthseeking reasons mentioned earlier.
Mensa doesn’t need to be a professional IQ testing center for their normings to be accurate, however. I am also not sure how not accounting for learning disorders would seriously alter IQTest.dk’s validity over self-reports.
However, it’s inaccurate to say that because someone puts their number in the box from IQTest.dk that they’re “equally flawed” to the other intelligence questions. Someone who self-reports an IQ number, any number, may not know if that number was obtained using accurate methodology. It may be an old score from childhood, and childhood IQ scores vary wildly compared to adult IQ scores. It may be an extrapolation from SAT scores, as I mentioned above. There are a number of ways in which self-reported IQ differs from reported IQtest.dk IQ.
This reads as unnecessarily tribalistic to me. I take it you think I am an undiscriminating skeptic? In any case, cool it.
I’d expect Less Wrongers to be more likely to be sociopaths than average. We’re generally mentally unusual.
Yeah, I am perfectly aware that the IQ score I got when I was three wasn’t valid then and certainly isn’t now. The survey didn’t ask “What’s a reasonable estimate of your IQ?”.
That is why I used the wording “statistically speaking”—it is understood to mean that I am working from statistics that were generated on the overall population as opposed to the specific population in question. You are completely ignoring my point which is that you have chosen a position which is going to be more or less impossible to defend. That position was:
It’s considered very rude to completely ignore someone’s argument and nit pick at their wording. That is what you just did.
Now it’s like you’re trying to make up a new definition of the word lying so you can continue to think your ridiculous assessment that:
By the common definition of the word “lie” producing a number when you do not know the number definitely does qualify as a lie. You’re not fooling me by trying to make a new definition of the word “lie” in this context. This behavior just looks ridiculous to me.
But they do need to provide a professional IQ testing service if they want their norms to mean something. The iqtest.dk test might turn out to be a better indicator of visual-spatial ability than IQ, or it might discriminate against autistics, of which LW might have an unusually large number (seeing as how there are a lot of CS people here).
Here you go twisting my wording. I specifically said:
The only reason I’m responding to you is because I am hoping you will see that you need to do more work on your rationality. Please consider getting some rationality training or something.
The general population would contain 50 sociopaths per 1000; I don’t think LessWrong contains 50 sociopaths per 1000. Rationality is a truth-seeking activity at its core, and I suspect a community of rationalists would do their best to avoid lying consciously.
I am not sure what “the common definition of the word ‘lie’” is, especially since there are a lot of differing interpretations of what it means to lie. I know that wrong answers are distinct from lies, however. I think that a lot of LessWrong people might have put an IQ that does not reflect an accurate result. But I doubt that many LessWrong people have put a deliberately inaccurate result for IQ. Barring “the common definition” (I don’t know what that is), I’m defining “stating something when you know what you are stating is false” as a lie, since someone can put a number when they don’t know for sure what the true number is but don’t know that the number they are stating is false either.
I don’t know what you mean by “mean something” with respect to Mensa Denmark’s normings. They will probably be less accurate than a professional IQ testing service, but I don’t know why they would be inaccurate or “meaningless” by virtue of their organization not being a professional IQ testing service.
The only way I can think of in which the self-reported numbers would be more accurate than the IQTest.dk numbers is if the LW respondents knew that their IQ numbers were from a professional testing service and they had gone to this service recently. But since the test didn’t specify how they obtained this self-report, I can’t say, nor do I think it’s likely.
IQTest.dk uses Raven’s Progressive Matrices which is a standard way to measure IQ across cultures. This is because IQ splits between verbal/spatial are not as common. It wouldn’t discriminate against autistics, because it actually discriminates in favor of autistics; people with disorders on the autism spectrum are likely to score higher, not lower.
I’m not sure how the bolding of “in that way” bolsters your argument. Paraphrased, it would be “in the way that the user types the IQ score into the survey box themselves, the IQTest.dk questions are equally flawed to the other intelligence questions.” But this neglects to consider that the source of the number is different; they are self-reports in the sense that the number is up to someone to recall, but if someone types in their IQTest.dk number you know it came from IQTest.dk. If someone types in their IQ without specifying the source, you have no idea where they got that number from—they could be estimating, it could be a childhood test score, and so on.
Remarks like these are unnecessary, especially since I’ve just joined the site.
In principle, one could make up a number or insert a number other than what they got. But I don’t think a nontrivial fraction of respondents did that.
Do you have statistics about how many sociopaths take extra-long online tests or how many sociopaths frequent rationalist forums? Or are you just talking about the percentage of sociopaths in the general population?
As a sidenote one would think that people willing to lie about their IQ would be positively correlated with people that look up Bayes’ birthdate before filling in their ‘estimation’. Anyone making a statistical analysis regarding this?
Downvoted for this statement and overall unnecessary rude tone.
Perhaps you became busy or something and did not have a chance to respond to my comment, but I am still curious about this:
I agree with that article, and that’s exactly why I downvoted you. You were contemptuously calling someone ‘honey’ and behaving like an all-around dick who smirks at newcomers and warns them that the rest of us will chew them up. That’s not the kind of thing I want to see here, and I won’t be a pacifist about it when you’re behaving like a weed.
I read the article again very carefully, trying to figure out whether Eliezer was advocating weeding people for behaving the way that I did. The article is about keeping “fools” out of the “garden of intelligent discussion”. It says nothing about the tone of posts and what tones should be weeded.
Actually, that was intended to make the tone friendlier. I acknowledge that this is not the way that you perceived it. My feeling is not contempt. I just don’t think he is likely to contribute constructively.
I like newcomers. The sole problem here was that about half of what this specific newcomer said was irrational.
That is exactly how I felt when I saw alfredmacdonald posting a bunch of irrational thoughts in this place for rationality. Are we both doing something wrong, then? The tone of your last comment doesn’t look any different to me than the tone of my comments. Is it that you feel that using a tone like this is never justified and we’ve both made a mistake or is it that you believe it’s okay to speak like this to people you feel are rude, but not to people you think are being irrational?
I speak to you like this because the simple explanation of “I downvoted you for excessive rudeness” doesn’t seem to satisfy you, to the point that you keep asking me for further clarification (and re-asked me when I ignored your first question). So I have to change my tone, because though the repetition of the same clarification “I downvoted you for excessive rudeness” should be adequate, you don’t get it.
Let me mention that I won’t continue discussing this, and if you continue pestering me you’ll incentivize me not to offer any clarification at all for future downvotes, and to just downvote you without explanation.
I see that you’re not interested in discussing the original issue that started this. I know that everyone has limited energy, so I accept this. It feels important to mention that none of my comments were written with an intent to pester you. I am not disagreeing with you about how you experienced them—different people experience things differently. I only mean to tell you that I did not intend to cause you this experience.
My intent was to understand your point of view better to see if our disagreement over whether a cold tone is justified for the purpose of garden-keeping would be resolved or if I would learn anything.
I hope you can see that despite our disagreement about how to protect the quality of the discussion area, we both care about whether the quality of the discussion area is good, and are willing to take action to protect it. I am not trying to troll; this wasn’t for “lulz”. I am doing it because I care. We have that one thing in common.
For this reason, I would prefer to use a friendly or neutral tone with you in the future. You may or may not be interested in putting this difference aside in order to have smoother interactions in the future, but I am willing to, so I invite you to do the same.
What do you say?
The verbiage “statistically speaking” was supposed to imply an acknowledgement that I know that the statistics were based on the overall population, not the specific context.
Ooh. This is a very, very good point. And if the survey participants really wanted to look gifted, they’d have probably decided that fudging the Bayes question was a necessity. I added your thought to my IQ accuracy comment. Upvote.
Thank you for explaining your downvote.
Is it that you disagree with Well-Kept Gardens Die By Pacifism or that you do this in a different way? If you have some different method, what is it?
Note: It’s probably inevitable that someone will ask me why I seem to agree with the spirit of this article if I don’t believe in “elitism”. My answer, succinctly, is that humans seek like-minded people to hang out with, and that this is part of fulfilling one’s social needs. It’s silly to let our attempt to meet basic needs be politicized and called “elitism” just because we gather around intellectualism; it’s no different from the desire of a single mom to spend some time out with adults because children can’t have the same conversations, or the desire of hunting-minded people to engage in activities without vegetarians harping on them (or vice versa).
That should be on a T-shirt.
I think that’s my favorite description on that list.
I’d buy that shirt. This is instant classic.
http://www.spreadshirt.com/design-your-own-t-shirt-C59/product/103759664/view/1/sb/l I think it’s a nice robot, but maybe some of our art-inclined people would like to design a robot god that’s got a Harry-Potterish feel about it?
I’m envisioning a robot in the classic Sistine Chapel God pose, only with menacingly glowing red eyes. Instead of pointing with its finger, it’s holding a wand. There’s a wizard hat on its head.
The image could be done in silhouette, for that extra-stylized look.
If I had any artistic skill, I’d draw it myself :-/
Spinoff is misspelled.
sigh Fixed: http://www.spreadshirt.com/design-your-own-t-shirt-C59/product/103760337/view/1/sb/l
This link takes me to a blank T-shirt design UI...
Myspace Fun Flash Generator
Yeah, this also fits my observations—I suspect that reading LW and hanging out with LW types in real life are substitute goods.
Some of the ‘descriptions of LessWrong’ can make for a great quote on the back of Yudkowsky’s book.
;-)
Pratchett always includes a quote that calls him a “complete amateur,” so there is some precedent for ostentatiously including negative reviews.
I have always despised the term “pseudointellectualism” since there isn’t exactly a set of criteria for a pseudointellectual, nor is there a process of accreditation for becoming an intellectual; the closest thing I’m aware of is, perhaps, a doctorate, but the world isn’t exactly short of Ph.D.s who put out crap. There are numerous graduate programs where the GRE/GPA combination to get in is barely above the undergrad averages, for example.
I’d like to have one of these quotes in cross-stitch to hang on my wall. (Hint: Christmas is around the corner!)
Before even reading the full details, I want to congratulate you for the impressive amount of work. The survey period is possibly my favorite time of the year on lesswrong!
EDIT: The links for the raw csv/xls data at the bottom don’t seem to work for me.
Thank you. That should be fixed now.
It’s indeed working, thank you!
Top 100 Users’ Data, aka Karma 1000+
I was thinking about the fact that there is probably a difference between active LWers and lurkers or newbies. So I looked at the data for the Top 100 users (actually Top 107, because there was a tie). This happily coincided with the nice Schelling point of 1000 karma. (This makes sense, because people are likely to treat that round number as a threshold.) To me, this reads as “has been actively posting for at least a year”.
So, some data on 1000+ karma people:
Slightly more likely to be male:
92.5% Male, 7.4% Female
NOTE: First percentage is for 1000+ users, second number is for all survey respondents
Much more likely to be polyamorous:
Prefer mono 36% v. 54%
Prefer poly 24% v. 13%
Uncertain 33% v. 30%
Other 4% v. 2%
About the same Age:
average 28.6 v. 27.8
About as likely to be single
51% v. 53%
Equally likely to be vegetarian
12%
Much more likely to use modafinil at least once per month:
15% v. 4%
About equal on intelligence tests
SAT out of 1600: 1509 v. 1486
SAT out of 2400: 2260 v. 2319
Self reported IQ: 138.5 v. 138.7
online IQ test: 127 v. 126
ACT score: 33.3 v. 32.7
Similar income
50k
Slightly lower Autism quotient:
average 22 v. 24
More likely to choose torture
Torture: 42% v. 22%
Dust Specks: 29% v. 37%
More likely to cooperate in a Prisoner’s Dilemma:
Cooperate: 36% v. 27%
Defect: 20% v. 29%
Some notes: Yes, I realize my data analysis methods are not the best. Namely, instead of comparing the people with >1000 karma to the people with <100 karma, which would have been more accurate, I just compared them to the overall results (which include their answers). I did this because it takes much less time.
Also, a hint for other people playing with the data in Excel format: a lot of the numbers are in text format, and are a pain to convert to numeric format in a way that allows you to manipulate them. The easiest workaround (so long as you don’t want to do anything complicated) is to just paste the needed columns either into a Google spreadsheet, or into another Excel sheet that’s been formatted numerically. If you want to do something complicated, you probably need to find the “right” way to fix it.
Multiplying the text by 1 or adding zero can often force auto-conversion in Excel. You can do this with Paste Special’s Multiply option: copy a cell containing 1, highlight the data, then press Alt+E, S, followed by V, M, Enter (Paste Special → Values, Multiply).
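For anyone doing this outside Excel, a rough pandas equivalent (a sketch, not part of the original analysis; the toy column stands in for the survey export) coerces the text columns directly:

```python
import pandas as pd

# Toy frame standing in for the survey export: numeric answers stored as
# text, including the stray non-numeric entries the survey data contains.
df = pd.DataFrame({"IQ": ["138", "142.5", "one gazillion", ""]})

# errors="coerce" turns anything unparseable into NaN instead of raising,
# so the column can be used in numeric operations directly.
df["IQ"] = pd.to_numeric(df["IQ"], errors="coerce")

print(df["IQ"].mean())  # mean of the two parseable values
```

This avoids the Paste Special dance entirely, and the NaNs drop out of means and counts automatically.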
I had prepared the following post before I came across the one by daenerys.
Here are the statistics of the members who claim to have 4000 karma or more. The sample was too small and I was too lazy to fix the data so I used medians (I did it manually). yvain can definitely do a better job since he has the data already fixed and can access the unpublished data.
Probabilities:
PManyWorld: 67.5%
PAliens: 80%
PAliens2: 20%
Psupernatural: 0.1%
PGod: 0.03%
PReligion: 0.00000005
PCryonics: 8.5%
PAntiagathics: 20%
PSimulation: 10%
PWarming: 80%
PGlobalcatastrophicrisk: 75%
Singularity: 2070
TypeofGlobalCatastrophicRisk: 10 Unfriendly AI, 7 Pandemic bioengineered, 3 Nanotech / grey goo, 2 Nuclear war, 1 Unknown Unknowns, 1 unsure
Personality:
MyersBriggs: 5 INTJ, 2 INTP, 2 ENFP, 1 ENTJ, 1 ISTP
BigFiveO: 80
BigFiveC: 35
BigFiveE: 37
BigFiveA: 38
BigFiveN: 37
IQTest: 135
AutismScore: 23
Politics:
5 Socialist, 12 Liberal, 3 Conservative, 6 Libertarian
AlternativeAlternativePolitics: 3 Moldbuggian, 2 Futarchist, 1 Technocratic, 1 Pragmatist (the rest were unremarkable).
PoliticalCompassLeftRight: 1.25
PoliticalCompassLiberty: −5.28
Vegetarians: 16%
SRS: 36%
0 INFPs with over 4k? Well, it looks like that has outed me as not filling in this year’s survey! Well, unless I was the type to be squeamish about revealing karma or identifying information in such a case (not likely!)
Nope, no one guessed whose sinister instrument this site is. Muaha.
This still suffers from selection bias—I’d imagine that people with lower IQ are more likely to leave the field blank than people with higher IQ.
I think this is only true if we’re going to also assume that the selection bias is operating on ACT and SAT scores. But we know they correlate with IQ, and quite a few respondents included ACT/SAT1600/SAT2400 data while they didn’t include the IQ; so all we have to do is take for each standardized test the subset of people with IQ scores and people without, and see if the latter have lower scores indicating lower IQs. The results seem to indicate that while there may be a small difference in means between the groups on the 3 scores, it’s neither of large effect size nor statistical significance.
ACT:
Original SAT:
New SAT:
The lack of variation is unsurprising since the (original) SAT and ACT are correlated, after all:
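A sketch of that check, with synthetic stand-in scores since the per-respondent numbers aren’t reproduced in this comment (the real analysis would split the survey csv by whether the IQ field was filled in, once per standardized test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins: SAT-out-of-1600 scores for respondents who did and
# didn't also report an IQ. Parameters are invented for illustration.
sat_with_iq = rng.normal(1500, 90, 80)
sat_without_iq = rng.normal(1480, 90, 120)

# Welch's t-test: does the group that skipped the IQ field score lower?
t, p_two_sided = stats.ttest_ind(sat_with_iq, sat_without_iq, equal_var=False)
print(t, p_two_sided)
```

Repeating this for the ACT and new-SAT columns gives the three comparisons described above.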
I’m interested in this analysis but I don’t think the results are presented nicely, and I am not THAT interested. If someone else wants to summarize the parent I promise to upvote you.
I… thought I did summarize it nicely:
That is actually better than I remembered immediately after reading it; with the data coming after the discussion my brain pattern-completed to expect a conclusion after the data. Also the paragraph is a little bit dense; a paragraph break before the last sentence might make it a little more readable in my mind.
I had already upvoted your post, regardless :)
Indeed, more than 2⁄3 of responders left the field blank, so the real IQ could be pretty much anything.
You’re fun to read. Posts explaining things and introducing terms that connect subjects and form patterns trigger reward mechanisms in the brain. This is uncorrelated to actually applying any lessons in daily life.
Two questions you might want to ask next year is “do you think it is practical and advantageous to reduce people’s biases via standardized exercises?” and “Has reading LW inspired you to try and reduce your own biases?”
This sounds like a job for cognitive psychology!
“Well-calibrated” should probably be improved to “well-calibrated about X”—it’s plausible that people have better and worse calibration about different subjects, and the samples in the survey only explored a tiny part of calibration space.
Why did you close it early? That seems entirely unnecessary.
I put a link and exhortation prominently in the #lesswrong topic from the day the survey opened to the day it closed.
3 vs 16 seems like quite a difference, even allowing for the small sample size. Is this consistent with the larger population?
So ~3x more people prefer polyamory than are actually engaged in it...
Impressive.
Woot! And I’m not even trying or linking LW especially often.
(I am also pleased by the nicotine and modafinil results, although you dropped a number in ‘Never: 76.5%’)
So more people are against than for. Not exactly a mandate for its use.
Sounds like you did a two-tailed test. shminux’s hypothesis, which he has stated several times IIRC, is that people who can solve it will not be taken in by Eliezer’s MWI flim-flam, as it were, and would be less likely to accept MWI. So you should’ve been running a one-tailed t-test to reject the hypothesis that the can-solvers are less MWI’d. The p-value would then be something like 0.13 by symmetry.
I would not describe this as an accurate conclusion. For one thing, I currently have one partner who has other partners, so I think I am unambiguously “currently engaged in polyamory” even though I would have put 1 on the survey.
For another, I think it is reasonable to say that someone who is in a relationship with exactly one other person, but is not monogamous with that person (i.e. is available to enter further relationships) is engaged in polyamory.
Do you think your situation explains 2/3s of those who prefer polyamory?
Well, I think you can probably break it down as follows, given just the data we have:
0 partners
1 partner, looking
1 partner, not looking
2 partners+
Of those, I would say the second and fourth are unambiguously practicing poly, the third could go either way but you could say is presumptively mono, and the first probably doesn’t count (since they are actively practicing neither mono nor poly.)
If someone wants to run those numbers, I’d be curious how they come out.
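A rough pandas sketch of that breakdown; the column names and toy rows are invented for illustration, since the real survey csv uses different ones:

```python
import pandas as pd

# Toy rows standing in for survey responses (columns are assumed names).
df = pd.DataFrame({
    "preference": ["poly", "poly", "poly", "mono", "poly"],
    "partners":   [0, 1, 2, 1, 1],
    "looking":    [True, True, False, False, False],
})

def bucket(row):
    # The four buckets from the comment above.
    if row.partners == 0:
        return "0 partners"
    if row.partners >= 2:
        return "2+ partners"
    return "1 partner, looking" if row.looking else "1 partner, not looking"

counts = df[df.preference == "poly"].apply(bucket, axis=1).value_counts()
print(counts)
```

Swapping in the real columns and filtering on the preference answers would produce the counts asked for here.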
The second could be people looking for replacements for their current partner, no? I wouldn’t call that unambiguous.
I don’t agree that the first doesn’t count. The Relationship Style question was about preferred style, not current active situation. It could be that 2⁄3 of the polyamorous people just can’t get a date (lord knows I’ve been there). (ETA:) Or, in the case of not looking, don’t want a date right now (somewhere I’ve also been).
I’m in the “no preference” camp, not the poly specifically, but I’m certainly there. LessWrong does seem to indirectly filter for people who are there, simply because people who aren’t are less likely to take an interest in things that would lead them to LW, IME.
TL;DR—I think it’s not that simple.
Opinion is divided as to whether poly is an orientation or a lifestyle (something one is vs. something one does).
i.e. saying someone with no partners is practising neither mono nor poly is like saying someone with no partners is not currently engaged in homo-/bi-/hetero-sexuality. (However I would accept a claim that they were engaged in asexuality.)
This is a good point.
I wonder if it’s worth even making the distinction between “lifestyle” and “act”. Thus, poly could be an orientation (“I’m not poly because I don’t want multiple partners”), lifestyle (“I’m not poly because I don’t have and I’m not actively seeking multiple partners”), and act (“I’m not poly because I don’t currently have multiple partners”).
I used to always use the “act” definition when discussing sexual orientation (“I don’t have one—I haven’t had sex with anyone lately”) to the confusion of all interlocutors.
Heh, in fact I started but then deleted as a derail some discussion of problems in activist and academic discussions of sexual orientation—what are we to make of someone whose claimed orientation (identification) does not match their current and past behaviour, which might in turn be different again to their stated actual preferences.
I’m not current in my academic reading of sexuality, but when I was, anyone researching from a public health perspective went with behaviour, while psychologists and sociologists were split between identification and preference.
Queer activism seems to have generally gone with identification as primary, although I’m not as current there as I used to be. The trumping argument there was actually precisely your situation, where to accept behaviour as primary meant that no virgins had any orientation, and that does not agree with our intuitions or most peoples’ personal experiences.
There’s also a bi-activism point which says that position means the only “true” bisexuals are people engaged in mixed-gender group sex. (This is intended as reductio ad absurdum but I’ve heard people use it seriously.)
Poly seems to be more complicated still, q.v. distinctions between swinging, “monogamish”, open relationships, polyfidelity and polyamory. I know multiple examples of dyadic couples who regularly have sex with other people but identify as monogamous, and of couples who aren’t currently involved with anyone else, aren’t looking, but are firm in their poly identification.
I guess my TL;DR is that I’m entirely untroubled by an apparent difference between preference and practice, and if the survey had asked similar questions about sexual orientation preference & practice, we would have seen “discrepancies” there too.
What struck me was not the difference in numbers of FtM and MtF, but the fact that more than ten percent of the survey population identifying as female is MtF.
Hypothesis: those directly affected by the troll policy (trolls) are more likely to have strong disapproval than those unaffected by the troll policy are to have strong approval.
In my opinion, a strong moderation policy should require a plurality vote in the negative (over approval and abstention) to fail a motion to increase security, rather than a direct comparison to the approval. (withdrawn as it applies to LW, whose trolls are apparently less trolly than other sites I’m used to)
Hypothesis rejected when we operationalize ‘trolls’ as ‘low karma’:
Plots of the scores, regular and log-transformed:
If this were anywhere but a site dedicated to rationality, I would expect trolls to self-report their karma scores much higher on a survey than they actually are, but that data is pretty staggering. I accept the rejection of the hypothesis, and withdraw my opinion insofar as it applies to this site.
I wonder, if you split out poly/mono preference and number of partners, whether the number who prefer poly but have <2 partners would be significantly different from the number who prefer mono but have <1 partner.
Now that I’ve wondered this out loud, I feel like I should have just asked a computer.
I was about to reply the same thing. The quoted statement doesn’t sound particularly more surprising than “Most people prefer to be in a relationship, but only a fraction of those are actually engaged in one”.
Would it be more surprising to find people that prefer poly relationships, but only have one partner and aren’t looking for more, than to find people that prefer mono relationships, but have no partners and aren’t looking for any?
Among those with firm mono/poly preferences, there are 15% of the former (24% if we also include people that prefer poly, have no partners, and aren’t looking for more) and 14% of the latter.
Also, roughly 2⁄7 of people that prefer poly are single, while roughly 3⁄7 of people that prefer mono are.
Thanks, computer!
Oh, I forgot to answer your actual question. Slightly over 2⁄3 of people that prefer poly have 0 or 1 partners.
Edit: Although I guess this much was evident from the data if we assume that people that prefer mono won’t have 2 or more partners. I guess the group that doesn’t have a firm mono/poly preference (which I ignored entirely) could confuse things a bit.
So, people that prefer mono are more likely to have their preferred number of partners, but people who prefer poly have more partners.
Not by that much, but yes, I suppose a tad more.
Thanks for clearing this up.
As I understand it, there isn’t good data. Stereotypically, there are more MtF than FtM. But according to Wikipedia, a Swedish study found a ratio of 1.4:1 in favor of MtF for those requesting sexual reassignment surgery, and 1:1 for those going through with it. Of course, this is the sort of Internet community where I’d expect some folks to identify as trans without wanting to go through surgery at all.
After I posted my comment, I realized that 3 vs 16 might just reflect the overall gender ratio of LW: if there’s no connection between that stuff and finding LW interesting (a claim which may or may not be surprising depending on your background theories and beliefs), then 3 vs 16 might be a smaller version of the larger gender sample of 120 vs 1057. The respective decimals are 0.1875 and 0.1135, which is not dramatic-looking. The statistics for whether membership differs between the two pairs:
(So it’s not even close to the usual significance level. As intuitively makes sense: remove or add one person in the right category, and the ratio changes a fair bit.)
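One way to run that comparison is Fisher’s exact test on the 2×2 table of the counts above (a sketch; not necessarily the exact test behind the omitted output):

```python
from scipy.stats import fisher_exact

# Counts from the comment: 3 vs 16 in the trans subsample,
# 120 vs 1057 in the overall gender breakdown.
table = [[3, 16], [120, 1057]]
odds_ratio, p = fisher_exact(table)  # two-sided by default
print(odds_ratio, p)
```

With cells this small, adding or removing one person in the rare category moves the ratio a lot, which is why the p-value stays far from significance.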
Under this theory, it seems (with low statistical confidence of course) that LW-interest is perhaps correlated with biological sex rather than gender identity, or perhaps with assigned-gender-during-childhood. Which is kind of interesting.
Does anybody know if this holds for other preferences that tend to vary heavily by gender? Are MtoF transsexuals heavily into, say, programming, or science fiction? (I know of several transsexual game developers/designers, all MtoF).
I don’t know of any such data. I’d imagine that there’s less of a psychological barrier to engaging in traditionally “gendered” interests for most transgendered people (that is, if you think a lot about gender being a social construct, you’re probably going to care less about a cultural distinction between “tv shows for boys” and “tv shows for girls”). Beyond that I can’t really speculate.
Edit: here’s me continuing to speculate anyway. A transgendered person is more likely than a cisgendered person to have significant periods of their life in which they are perceived as having different genders, and therefore is likely to be more fully exposed to cultural expectations for each.
FWIW, I have the opposite intuition. Transgendered people (practically by definition) care about gender a lot, so presumably would care more about those cultural distinctions.
Contrast the gender skeptic: “What do you mean, you were assigned male but are really female? There’s no ‘really’ about it—gender is just a social construct, so do whatever you want.”
It’s more complicated than that. Gender nonconformity in childhood is frequently punished, so a great many trans people have some very powerful incentives to suppress or constrain our interests early in life, or restrict our participation in activities for which an expressed interest earns censure or worse.
Pragmatically, gender is also performed, and there are a lot of subtle little things about it that cisgender people don’t necessarily have innately either, but which are learned and transmitted culturally, many of which are the practical aspects of larger stuff (putting on makeup and making it look good is a skill, and it consists of lots of tiny subskills). Due to the aforementioned process, trans people very frequently don’t get a chance to acquire those skills during the phase when their cis counterparts are learning them, or face more risks for doing so.
Finally, at least in the West: Trans medical and social access were originally predicated on jumping through an awful lot of very heteronormative hoops, and that framework still heavily influences many trans communities, particularly for older folks. This aspect is changing much faster thanks to the internet, but you still only need to go to the right forum or support group to see this dynamic in action. There’s a lot of gender policing, and some subsets of the community who basically insist on an extreme version of this framing as a prerequisite for “authentic” trans identity.
So...when a trans person transitions, very often they are coping with some or all of this, often for the first time, simultaneously, and within a short time frame. We’re also under a great deal of pressure about all of it.
Relevant: http://xkcd.com/592/
Yeah, no idea how good my intuitions are here. I don’t have much experience with the subject, and frankly have a little difficulty vividly imagining what it’s like to have strong feelings about one’s own gender. So let’s go read Jandila’s comments instead of this one.
It’s a common inside joke amongst SF-loving, programmer trans women that there are a lot of SF-loving, programmer trans women, or that trans women are especially and unusually common in those fields. But they usually don’t socialize with large swathes of other trans women who come unsorted by any other criterion save “trans and women”; I think this is an availability bias coupled with a bit of “I’ve found my tribe!” thinking.
Yep, I’d guess that matters a great deal. (IIRC certain radical feminists dislike male-to-female transsexuals for that reason.)
That’s the explanation I’d lean towards myself.
As for the radical-feminists-versus-transsexuals thing—there seems to be a fair amount of tension between the gender/sexuality theories of different parts of the queer and feminist movements, which are generally glossed over in favor of cooperation due to common goals. Which, actually, is somewhat heartening.
Now I feel dumb for not even noticing that. “In a group where most people were born males, why is it the case that most trans people were born males?” doesn’t even seem like a question.
That sounds like hindsight bias. If there were 16 trans men and 3 trans women, you’d be saying ‘”In a group where most people currently identify as men, why is it the case that most trans people currently identify as men?” doesn’t even seem like a question.’
I can attest that this reasoning occurred to me knowing only that there were 1.3% trans women; my prediction was ‘based on my experience with trans people, this probably reflects upbringing-assigned gender, so I expect to see fewer trans men’.
Haha, that’s a great way to look at it. Had skipped over this myself too!
Now it makes me wonder which would be more significant between this and the apparent prominence of M->F over F->M that I just read some stats about (if the stats are true/reliable, 0.7 conf there).
link?
Oh, heh, sorry.
I mentioned them in a different subthread around here. The linked PDF has a few fun numbers, but didn’t notice any obvious dates or timelines. The main website hosting it has a bit more data and references from what little I looked into.
Hmm. Thanks for the link to that wikipedia page. Interesting...
...the definitions given on that wikipedia page seem to imply that I’m strongly queer and/or andro*, at least in terms of my experiences and gender-identity. Had never noticed nor cared (which, apparently, is a component of some variants of andro-somethings). I’m (very visibly) biologically male and “identify” (socially) as male for obvious reasons (AKA don’t care if miscategorized, as long as the stereotyping isn’t too harmful), and I’m attracted mostly to females because of instinct (I guess?) and practical issues (e.g. disdain of anal sex).
Oh well, one more thing to consider when trying to figure out why people get confused by my behaviors. I’ve always (in recent years anyway) thought of myself as “human with penis”.
If you can’t think of practical ways for two people with penises to have sex that don’t involve anal, you might just need better porn.
Haha, true.
Then again, I’m guessing looking at actual male-male porn would decrease the odds of that happening—which I’ve never done yet.
Same here. (But one of the reasons why I identify as male in spite of being somewhat psychologically androgynous is that I take exception with the notion that if someone doesn’t have sufficiently masculine (feminine) traits, he (she) is not a ‘real’ man (woman). And I’m almost exclusively attracted to females, almost exclusively because of ‘instinct’ (a.k.a. males just don’t give me a boner; is there a better word than “instinct”?) but also because I’d like to have biological children some day.)
Maybe the next survey should include the Bem Sex Role Inventory. (According to this, I’m slightly above median for both masculinity and femininity, and slightly more feminine than masculine.)
Yes, but I imagined someone like Eliezer might have the hypothesis that the math naturally leads to MWI and rationalists who understood the math would realize that.
Might be close enough to assume it’s due to the small sample:
No idea how reliable those numbers are, nor how they compare with elsewhere in the world. The main website that hosts that PDF should have more complete data that could be cross-referenced, if someone wants to take the time to do that.
Interesting. Going to the source of some of those numbers, it doesn’t look like there was clear specification of what they meant by “sexual orientation”, so that line of the chart is actually entirely meaningless to me. Anyone have a good guess as to how people would have answered?
AFAICT It seems to be answered in terms of the sex of their partners post-transition, i.e. a hetero MTF would prefer sexually-male partners.
The fact that the 59% stat for history of rape is symmetrical for MTF and FTM really bugs me, though. It seems to imply weird causal arrows pointing in completely opposite directions depending on whether you were originally male or female, based on my prior knowledge.
Which seems very scary, because it could also imply that MTFs are a dozen decibels more likely to be targets of rape than average females. Now I wonder if that has been taken into account when looking at the mental health stats.
Yeah, somewhere in there are some pretty disturbing violent crime stats. A notable proportion of violent crime in one country was towards trans people.
Overview for the United States
Like “FTM: 35% Heterosexual, 33% Bisexual, 18% Gay, 12% Lesbian”.
No, the data showed people who could solve the Schrodinger Equation being more likely to accept MWI, contrary to shminux’s hypothesis, so the p-value would be 0.13 in a one-tailed test for the opposite of shminux’s hypothesis. I guess that means the p-value for a one-tailed test for shminux’s hypothesis would be 0.87.
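The arithmetic here is just the symmetry of a two-sided test statistic; as a sketch, using the two-tailed p of 0.26 implied by these numbers:

```python
def one_tailed_p(p_two_tailed, effect_in_hypothesized_direction):
    """Convert a two-tailed p-value to a one-tailed one for a
    symmetric test statistic (e.g. a t-test)."""
    if effect_in_hypothesized_direction:
        return p_two_tailed / 2
    return 1 - p_two_tailed / 2

# The observed effect was opposite to shminux's hypothesized direction:
print(one_tailed_p(0.26, False))  # 0.87
# One-tailed test of the opposite hypothesis:
print(one_tailed_p(0.26, True))   # 0.13
```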
Well, there also are nine times as many male-born males as female-born females, for that matter.
See http://lesswrong.com/lw/fp5/2012_survey_results/7xfh
Thank you for this public service. It seems definitely helpful for the community, and possibly helpful for historians :-)
I now have this mental image of future sociology grad students working on their theses by reading through every article and comment ever posted on Less Wrong, and then analyzing us.
I now have an image of those sociologists giving up on reading everything and writing scripts to do some sort of ngram or inverse-markov analysis, then mis-applying statistics to draw wrong conclusions from it. Am I cynical yet?
I was actually thinking of the kind of sociology thesis that doesn’t use any statistics, and is rather a purely qualitative analysis.
I now have an image of farther future sociologists writing scathing commentaries on the irony of poorly-used statistical measures of this community.
I’m imagining them being vast posthumans with specialized modalities for it that can’t really be called “reading”.
Only if you took the SAT before 1994. Here’s the percentiles for SATs taken in 2012; someone who was 97th percentile would get ~760 on math and ~730 on critical reading, adding up to 1490 (leaving alone the writing section to keep it within 1600), and 97th percentile corresponds to an IQ of 128.
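The percentile-to-IQ conversion above can be sketched with the standard assumption that IQ is normally distributed with mean 100 and standard deviation 15:

```python
# Convert a percentile rank to an IQ score, assuming IQ ~ Normal(100, 15).
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

# The 97th percentile of that distribution:
print(round(iq.inv_cdf(0.97)))  # → 128
```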
An important part of the calibration chart (for people) is the frequency of times that they provide various calibrations. Looking at your table, I would focus on the large frequency between 10% and 30%.
I’ll also point out that fixed windows are a pretty bad way to do elicitation. I tend to come at the calibration question from the practical side: how do we get useful probabilities out of subject-matter experts without those people being experts at calibration? Adopting those strategies seems more useful than making people experts at calibration.
Some Bayesian analysis using the BEST MCMC library for normal two-group comparisons:
(Full size image.)
The results are interesting and not quite the same as a t-test:
we get estimates of standard deviations, among other things, for free. They look pretty different, and there’s an 85.8% chance the deviations of the Schrodinger-knowers and not-knowers on MWI are different, suggesting to me a polarizing effect where the more you know, the more extreme your view either for or against. That seems reasonable, since the more information you have, the less your uncertainty should be.
the difference in means estimate is sharper than the t-test: Yvain’s t-test gave a p-value of 0.26 if the null hypothesis were true (he makes the classic error when he says “there is a 26% probability this occurs by chance”—no, there’s a 26% chance this happened by chance if one assumes the null hypothesis is true, which says absolutely nothing about whether this happened by chance).
We, however, by using Bayesian techniques can say that given the difference in mean beliefs: there is a 7.2% chance that the null hypothesis (equal belief) or the opposite hypothesis (lower belief) is true in this sample.
We also get an effect-size for free from the difference in means. −0.132 (mode) isn’t too impressive, but it’s there.
However, both BEST and the t-test are normal tests. The histograms look like the data may be a bimodal distribution: a hump of skeptics at 10%, a hump of believers in the 70%s—and the weirdly low 40s in both groups is just a low point in both? I don’t know how much of an issue this is.
For what it’s worth, I interpreted his “there is a 26% probability this occurs by chance” exactly as “if there’s no real difference, there’s a 26% probability of getting this sort of result by chance alone” or equivalently “conditional on the null hypothesis Pr(something at least this good) = 26%”. I’d expect that someone who was making the classic error would have said “there is a 26% probability this occurred by chance”.
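The distinction can be made concrete with a quick simulation (hypothetical numbers, not the survey data): assuming the null hypothesis of no real difference, how often does chance alone produce a difference in group means at least as large as some observed one?

```python
import random
from statistics import mean

random.seed(0)

n = 20            # hypothetical group size
observed = 0.5    # hypothetical observed difference in means

# Simulate many replications under the null: both groups drawn
# from the same distribution, so any difference is chance alone.
sims = 10_000
extreme = 0
for _ in range(sims):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if abs(mean(a) - mean(b)) >= observed:
        extreme += 1

p = extreme / sims
# p estimates Pr(difference at least this large | null is true) --
# a statement about the data given the null, not about the
# probability that the null itself is true.
print(p)
```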
When you discuss the calibration results, could you mention that the surveyors were told what constituted a correct answer? I didn’t take the survey and it isn’t obvious from reading this post. Also, could you include a plug for PredictionBook around there? You’ve included lots of other helpful plugs.
Done.
Maybe a plug for the Credence Game too? ;) It’s less in touch with real life than prediction book, but a lot faster.
Another result: no correlation between autism score and consequentialism endorsement.
I wonder whether consequentialism endorsement and possibly some of the probability questions correlate with the two family background questions.
Two? I see
FamilyReligion
but I dunno what your other one is. But to test FamilyReligion & MoralViews: I wondered if maybe the levels were screwing things up, even though they’re in a logical order which should show any correlation if it exists, so I binned all the results into just binary ‘atheist’ and ‘theist’ (as it were), and looked at a chi-squared:
I am a little surprised. Maybe I messed up somehow.
The one about which religion.
That’s
FamilyReligion
then… I don’t see why there’d be two such questions about family religion as you seem to think.
I meant RELIGIOUS BACKGROUND.
That field has 41 levels, oy gevalt (I particularly like the religious background “Mother: Jewish; Fat”). Someone else can figure out that analysis!
;-D
(Yvain should use larger text fields the next time.)
The lesson I have drawn from the survey is that free-response text fields are the devil and no one is to be trusted with them.
Yvain, I rechecked the calibration survey results, and encourage someone to recheck my recheck further:
First, these strata overlap… is 5 in 0-5 or 5-15? The N doesn’t actually match either one when I recheck.
Secondly, I am not sure what program you used to calculate the statistics, but when I checked in Excel, some people used percentages that got pulled in as numbers less than one. I tried to clean that up for these. (I also removed someone who answered 150.)
Thirdly, there are 20 people in this N. You can be either 60% correct (12 correct), or 65% correct (13 correct), but 60.2% correct in this line seems weird. 85-95: 60.2% [n = 20]
Here was my attempt at recalculating those figures: N after data cleaning was 998.
0-<5: 9.1% [n = 2⁄22]
5-<15: 13.7% [n = 25⁄183]
15-<25: 9.3% [n = 21⁄226]
25-<35: 10% [n = 20⁄200]
35-<45: 11.1% [n = 10⁄90]
45-<55: 17.3% [n = 19⁄110]
55-<65: 20.8% [n = 11⁄53]
65-<75: 22.6% [n = 7⁄31]
75-<85: 36.7% [n = 11⁄30]
85-<95: 63.2% [n = 12⁄19]
95-100: 88.2% [n = 30⁄34]
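The half-open buckets above can be reproduced mechanically, so that a value like 5 lands in exactly one bin. A minimal sketch, with made-up responses rather than the actual survey data:

```python
# Bucket calibration answers into non-overlapping, half-open bins
# (so e.g. 5 falls in 5-<15, not 0-<5; 100 is included in 95-100).
def calibration_bin(conf):
    """Return the label of the half-open bin containing conf (0-100)."""
    edges = [0, 5, 15, 25, 35, 45, 55, 65, 75, 85, 95, 100]
    for lo, hi in zip(edges, edges[1:]):
        if lo <= conf < hi or (hi == 100 and conf == 100):
            return f"{lo}-<{hi}" if hi < 100 else f"{lo}-100"
    raise ValueError(f"confidence out of range: {conf}")

answers = [5, 14.9, 50, 95, 100]  # hypothetical cleaned responses
print([calibration_bin(a) for a in answers])
# → ['5-<15', '5-<15', '45-<55', '95-100', '95-100']
```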
I express low confidence in these remarks because I haven’t rechecked this or gone into detail about data cleaning, but my brief take is:
1: Yes, there were some errors that made it look a bit worse than it was.
2: It still shows overconfidence. (Edit: see possible caveat below)
Question: Do we have enough data to determine if that hump at near 10% confidence that you are right is significant?
Edit: I’m not a statistician, but I do notice there appear to be substantially more respondents in the lower confidence ranges. I mean, yes, on average, the people who answered in those high 55-<85 ranges were quite far off, but more people answered in the 15-<25 range than in all three of those groups put together.
I think the calibration data needs additional cleaning. Eyeballing, I see % signs, decimals, and English comments.
In the fair coin questions, there were two people answering 49.9, one 49.9999, one 49.999999, and one 51. :-/
Here is a paper which shows that natural coin tosses are not fair, with a 51:49 bias toward the side that’s “up” at the beginning. Maybe ask for the probability on an idealized coin toss next year? edit: fixed the markup
Certain tossing techniques can bias the results much more than that, as described in Probability Theory by Jaynes. But the survey did ask about a “fair coin” (emphasis added).
(For the
[text](url)
link syntax to work, you need the full URL, i.e. including the http:// bit at the start: http://comptop.stanford.edu/preprints/heads.pdf)
Were they excluded from the probabilities questions?
It was stated that they should give the obvious answer and that surveys that didn’t follow the rules would be thrown out… but maybe 50% isn’t as obvious as 99.99% of the population thinks it is.
Is there any reason the prompt for the question shouldn’t have explicitly stated “(The obvious answer is the correctly formatted value equivalent to p=0.5 or 50%)”?
My working theory is that they were trolling.
Either way, should we or shouldn’t we have trusted the rest of their answers to be statistically reliable?
I see no reason to throw out their responses. They appear to just not be familiar with the terminology. To someone that does not know that “fair coin” is defined as having .5 probability for each side, they might envision it as a real physical coin that doesn’t have two heads.
Now that I think about that, lumping Protestants and Orthodoxes together and keeping Catholics separate is about as bizarre as it gets.
This is one question where the results really surprised me. Combining natural and engineered pandemics, almost a third of respondents picked it as the top near-term x-risk, which was almost twice as many as the next highest risk. I wonder if the x-risk discussions we tend to have may be somewhat misallocated.
Note that the question on the survey was not about existential risks:
I answered bio-engineered pandemics, but would have answered differently for x-risks.
Note that x-risks as defined by that questions are not the same as x-risks as defined by Bostrom. In principle, a catastrophe might kill 95% of the population but humanity could later recover and colonize the galaxy, or a different type of catastrophe might only kill 5% of the population but permanently prevent humans from creating extraterrestrial settlements, thereby setting a ceiling to economic growth forever.
So, if extraterrestrial settlements are unlikely to be ever created regardless of any catastrophe, the point is moot.
I think that the likes of Bostrom would consider anything that would prevent us from establishing extraterrestrial settlements to be a catastrophe itself, even though it’s ‘business as usual’.
Then the ‘catastrophe’ could be quite possibly intrinsic in the laws of physics and the structure of the solar system.
Many are.
I think I went for political/economic collapse, but with no very great certainty. This is probably a question which could lead to some interesting discussion.
Wiping out 90% or so of the human race without killing everyone seems unlikely in general. It wasn’t on the list, but I’d probably go for infrastructure disaster—something which could include more than one of the listed items.
Less likely than killing 100% of the human race? Why?
Remember that humanity went through bottlenecks where the total population was reduced to tens of thousands scattered in pockets of hundreds to thousands. Humanity survived the Toba super eruption in prehistoric times, and would probably survive the Chicxulub impact if it happened today.
Other than an impact powerful enough to sterilize the biosphere, I don’t see many things capable of obliterating the human species in the foreseeable future. Pandemics don’t have a 100% kill rate (at least the natural ones; maybe an engineered one could, but who would be foolish enough to create such a thing?)
So many people.
A disgruntled microbiologist?
I’m not an expert, but I don’t think that a single individual, or even a small team, could do that.
The genetic variety created and maintained by sexual reproduction pretty much ensures that no single infection mechanism is effective on all individuals: key components such as the cell surface proteins and the immune system show a large phenotypic variability even among people of common ancestry living in a small geographic region (that’s also the reason why finding compatible organs for transplants is difficult).
Even for the most infectious pathogens, there is always a sizeable part of the population that is completely or partially immune.
In order to create an artificial pathogen capable of infecting and killing everybody, you have to engineer multiple redundant infection mechanisms tailored to every relevant phenotypic variation, including the rare ones. Even if your pathogen kills 99.99% of human population, far more than any natural pathogen ever did, there would be 700,000 people left, more than enough to repopulate the planet.
Is this actually true? Of course, few diseases would actually have good odds of infecting everyone, but surely that’s more a matter of exposure. [EDIT: or how you define “partial immunity”.]
By “partial immunity” I mean that you catch the disease, but only in attenuated form, maybe even subclinical or asymptomatic, and usually develop full immunity afterwards. This happened even with highly infectious diseases such as the medieval Black Death (Yersinia pestis), malaria, smallpox, and now happens with HIV.
AFAIK, a superbug capable of infecting and killing everyone doesn’t seem to be biologically plausible, at least without extensive genetic engineering.
Well, genetic engineering is a common part of scenarios like this.
However, it was my understanding that not all natural diseases grant immunity to survivors. I’m not an expert, of course.
Tetanus doesn’t grant immunity if you actually get it and survive. They are soil/intestinal bacteria normally and they don’t grow within you to a high enough number that your immune system can get a good look at them, their toxin is just potent enough that even at low concentrations it kills you.
There are also protist pathogens which express vast quantities of a particular coat protein on their surface such that when you form an adaptive immune response agains them it is almost certainly against that protein—and something like one in 10^9 cell divisions their DNA rearranges such that they start expressing a different coat protein and evade the last immune response that their host managed to raise, resetting back to no immunity.
Aha, I knew it!
That’s really interesting, actually.
I’ve been led to understand that this was usually the other way around, or that the mechanism that allowed their survival in the first place was “change something in the immune system, see if it works, repeat until it does”. Through some magical process of biology or chemistry afterwards, the found solution is then “remembered” and ready to be deployed again if the disease returns. I’m not quite sure whether anyone understands the exact mechanism behind this magic, but I certainly don’t (yet). *
By “the other way around”, I mean a selection effect; they survived because they were already more resistant and had the right biological configuration ready to become immune to it or somesuch. I’m not clear on the details, this is all second-hand (but from people who knew what they were talking about, or so it seemed at the time).
* ETA: Got curious. Looks like there’s a pretty good understanding of the matter in the field after all. +1 esteem for immunology and +0.2 for scientific medicine in general. And those are some really great wikipedia articles.
Oh, yeah, I know about that. I understood that it didn’t work on everything, though. (Well, it doesn’t work on the common cold, for a start, although I’m not sure if that kind of constant low-level mutation is feasible for more … powerful … diseases.)
EDIT: turns out it is.
I don’t know about 90% of the human race, but after the recent tunnel collapse in Japan, I think infrastructure disaster is looking a lot more likely, or possibly slow, grinding infrastructure failure.
You could make a case that too much is taken by elites, or that too much is given away, but I think the big problem is that building is fun and maintenance is boring.
This survey looks like it was a massive amount of work to analyse. Three cheers for Yvain!
These are the results of the CFAR questions; I have also posted this as its own Discussion section post.
SUMMARY: The CFAR questions were all adapted from the heuristics and biases literature, based on five different cognitive biases or reasoning errors. LWers, on the whole, showed less bias than is typical in the published research (on all 4 questions where this was testable), but did show clear evidence of bias on 2-3 of those 4 questions. Further, those with closer ties to the LW community (e.g., those who had read more of the sequences) showed significantly less bias than those with weaker ties (on 3 out of 4-5 questions where that was testable). These results all held when controlling for measures of intelligence.
METHOD & RESULTS
Being less susceptible to cognitive biases or reasoning errors is one sign of rationality (see the work of Keith Stanovich & his colleagues, for example). You’d hope that a community dedicated to rationality would be less prone to these biases, so I selected 5 cognitive biases and reasoning errors from the heuristics & biases literature to include on the LW survey. There are two possible patterns of results which would point in this direction:
high scores: LWers show less bias than other populations that have answered these questions (like students at top universities)
correlation with strength of LW exposure: those who have read the sequences (have been around LW a long time, have high karma, attend meetups, make posts) score better than those who have not.
The 5 biases were selected in part because they can be tested with everyone answering the same questions; I also preferred biases that haven’t been discussed in detail on LW. On some questions there is a definitive wrong answer and on others there is reason to believe that a bias will tend to lead people towards one answer (so that, even though there might be good reasons for a person to choose that answer, in the aggregate it is evidence of bias if more people choose that answer).
This is only one quick, rough survey. If the results are as predicted, that could be because LW makes people more rational, or because LW makes people more familiar with the heuristics & biases literature (including how to avoid falling for the standard tricks used to test for biases), or because the people who are attracted to LW are already unusually rational (or just unusually good at avoiding standard biases). Susceptibility to standard biases is just one angle on rationality. Etc.
Here are the question-by-question results, in brief. The comment below contains the exact text of the questions, and more detailed explanations.
Question 1 was a disjunctive reasoning task, which had a definitive correct answer. Only 13% of undergraduates got the answer right in the published paper that I took it from. 46% of LWers got it right, which is much better but still a very high error rate. Accuracy was 58% for those high in LW exposure vs. 31% for those low in LW exposure. So for this question, that’s:
LWers biased: yes
LWers less biased than others: yes
Less bias with more LW exposure: yes
Question 2 was a temporal discounting question; in the original paper about half the subjects chose money-now (which reflects a very high discount rate). Only 8% of LWers did; that did not leave much room for differences among LWers (and there was only a weak & nonsignificant trend in the predicted direction). So for this question:
LWers biased: not really
LWers less biased than others: yes
Less bias with more LW exposure: n/a (or no)
Question 3 was about the law of large numbers. Only 22% got it right in Tversky & Kahneman’s original paper. 84% of LWers did: 93% of those high in LW exposure, 75% of those low in LW exposure. So:
LWers biased: a bit
LWers less biased than others: yes
Less bias with more LW exposure: yes
Question 4 was based on the decoy effect aka asymmetric dominance aka attraction effect (but missing a control condition). I don’t have numbers from the original study (and there is no correct answer) so I can’t really answer 1 or 2 for this question, but there was a difference based on LW exposure: 57% vs. 44% selecting the less bias related answer.
LWers biased: n/a
LWers less biased than others: n/a
Less bias with more LW exposure: yes
Question 5 was an anchoring question. The original study found an effect (measured by slope) of 0.55 (though it was less transparent about the randomness of the anchor; transparent studies w. other questions have found effects around 0.3 on average). For LWers there was a significant anchoring effect but it was only 0.14 in magnitude, and it did not vary based on LW exposure (there was a weak & nonsignificant trend in the wrong direction).
LWers biased: yes
LWers less biased than others: yes
Less bias with more LW exposure: no
One thing you might wonder: how much of this is just intelligence? There were several questions on the survey about performance on IQ tests or SATs. Controlling for scores on those tests, all of the results about the effects of LW exposure held up nearly as strongly. Intelligence test scores were also predictive of lower bias, independent of LW exposure, and those two relationships were almost the same in magnitude. If we extrapolate the relationship between IQ scores and the 5 biases to someone with an IQ of 100 (on either of the 2 IQ measures), they are still less biased than the participants in the original study, which suggests that the “LWers less biased than others” effect is not based solely on IQ.
MORE DETAILED RESULTS
There were 5 questions related to strength of membership in the LW community which I standardized and combined into a single composite measure of LW exposure (LW use, sequence reading, time in community, karma, meetup attendance); this was the main predictor variable I used (time per day on LW also seems related, but I found out while analyzing last year’s survey that it doesn’t hang together with the others or associate the same way with other variables). I analyzed the results using a continuous measure of LW exposure, but to simplify reporting, I’ll give the results below by comparing those in the top third on this measure of LW exposure with those in the bottom third.
There were 5 intelligence-related measures which I combined into a single composite measure of Intelligence (SAT out of 2400, SAT out of 1600, ACT, previously-tested IQ, extra credit IQ test); I used this to control for intelligence and to compare the effects of LW exposure with the effects of Intelligence (for the latter, I did a similar split into thirds). Sample sizes: 1101 people answered at least one of the CFAR questions; 1099 of those answered at least one LW exposure question and 835 of those answered at least one of the Intelligence questions. Further details about method available on request.
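The standardize-and-combine construction described above can be sketched as z-scoring each component and averaging. The numbers here are made up for illustration, not the survey data:

```python
from statistics import mean, stdev

def zscores(xs):
    """Standardize a list to mean 0, SD 1."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

# Hypothetical component measures for five respondents
# (stand-ins for karma, sequence reading, years in community).
karma     = [0, 10, 50, 200, 1000]
sequences = [1, 2, 3, 4, 5]
years     = [0.5, 1, 2, 3, 6]

# Composite exposure score: mean of the standardized components,
# so no one component dominates just because of its raw scale.
composite = [mean(zs) for zs in zip(zscores(karma),
                                    zscores(sequences),
                                    zscores(years))]
print(composite)
```

Standardizing first matters because the raw scales differ wildly (karma in the hundreds, years in single digits); averaging raw values would effectively weight by scale.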
Here are the results, question by question.
Question 1: Jack is looking at Anne, but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person?
Yes
No
Cannot be determined
This is a “disjunctive reasoning” question, which means that getting the correct answer requires using “or”. That is, it requires considering multiple scenarios. In this case, either Anne is married or Anne is unmarried. If Anne is married then married Anne is looking at unmarried George; if Anne is unmarried then married Jack is looking at unmarried Anne. So the correct answer is “yes”. A study by Toplak & Stanovich (2002) of students at a large Canadian university (probably U. Toronto) found that only 13% correctly answered “yes” while 86% answered “cannot be determined” (2% answered “no”).
On this LW survey, 46% of participants correctly answered “yes”; 54% chose “cannot be determined” (and 0.4% said “no”). Further, correct answers were much more common among those high in LW exposure: 58% of those in the top third of LW exposure answered “yes”, vs. only 31% of those in the bottom third. The effect remains nearly as big after controlling for Intelligence (the gap between the top third and the bottom third shrinks from 27% to 24% when Intelligence is included as a covariate). The effect of LW exposure is very close in magnitude to the effect of Intelligence; 60% of those in the top third in Intelligence answered correctly vs. 37% of those in the bottom third.
original study: 13%
weakly-tied LWers: 31%
strongly-tied LWers: 58%
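The disjunctive reasoning can be verified by brute-force enumeration of Anne’s two possible marital statuses:

```python
# Jack (married) looks at Anne; Anne looks at George (unmarried).
# Check both cases for Anne: is a married person looking at an
# unmarried person either way?
looking_at = [("Jack", "Anne"), ("Anne", "George")]

results = []
for anne_married in (True, False):
    married = {"Jack": True, "George": False, "Anne": anne_married}
    results.append(any(married[a] and not married[b]
                       for a, b in looking_at))

print(results)  # → [True, True]: the answer is "yes" in both cases
```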
Question 2: Would you prefer to receive $55 today or $75 in 60 days?
This is a temporal discounting question. Preferring $55 today implies an extremely (and, for most people, implausibly) high discount rate, is often indicative of a pattern of discounting that involves preference reversals, and is correlated with other biases. The question was used in a study by Kirby (2009) of undergraduates at Williams College (with a delay of 61 days instead of 60; I took it from a secondary source that said “60” without checking the original), and based on the graph of parameter values in that paper it looks like just under half of participants chose the larger later option of $75 in 61 days.
LW survey participants almost uniformly showed a low discount rate: 92% chose $75 in 61 days. This is near ceiling, which didn’t leave much room for differences among LWers, and in fact there were not statistically significant differences. For LW exposure, top third vs. bottom third was 93% vs. 90%, and for Intelligence it was 96% vs. 91%.
original study: ~47%
weakly-tied LWers: 90%
strongly-tied LWers: 93%
Question 3: A certain town is served by two hospitals. In the larger hospital, about 45 babies are born each day. In the smaller one, about 15 babies are born each day. Although the overall proportion of girls is about 50%, the actual proportion at either hospital may be greater or less on any day. At the end of a year, which hospital will have the greater number of days on which more than 60% of the babies born were girls?
The larger hospital
The smaller hospital
Neither—the number of these days will be about the same
This is a statistical reasoning question, which requires applying the law of large numbers. In Tversky & Kahneman’s (1974) original paper, only 22% of participants correctly chose the smaller hospital; 57% said “about the same” and 22% chose the larger hospital.
On the LW survey, 84% of people correctly chose the smaller hospital; 15% said “about the same” and only 1% chose the larger hospital. Further, this was strongly correlated with strength of LW exposure: 93% of those in the top third answered correctly vs. 75% of those in the bottom third. As with #1, controlling for Intelligence barely changed this gap (shrinking it from 18% to 16%), and the measure of Intelligence produced a similarly sized gap: 90% for the top third vs. 79% for the bottom third.
original study: 22%
weakly-tied LWers: 75%
strongly-tied LWers: 93%
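The correct answer can be confirmed with an exact binomial calculation, treating each birth as a fair coin flip:

```python
from math import comb

def p_more_than_60pct_girls(n):
    """P(more than 60% of n fair 50/50 births are girls) on one day."""
    threshold = 0.6 * n
    return sum(comb(n, k) for k in range(n + 1) if k > threshold) / 2**n

small = p_more_than_60pct_girls(15)   # smaller hospital
large = p_more_than_60pct_girls(45)   # larger hospital
print(small, large)  # smaller hospital has such days noticeably more often
```

Larger samples cluster more tightly around 50%, so the smaller hospital sees extreme days more often, exactly as the law of large numbers predicts.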
(continued below, due to restrictions on comment length)
(more detailed results, continued)
Question 4: Imagine that you are a doctor, and one of your patients suffers from migraine headaches that last about 3 hours and involve intense pain, nausea, dizziness, and hyper-sensitivity to bright lights and loud noises. The patient usually needs to lie quietly in a dark room until the headache passes. This patient has a migraine headache about 100 times each year. You are considering three medications that you could prescribe for this patient. The medications have similar side effects, but differ in effectiveness and cost. The patient has a low income and must pay the cost because her insurance plan does not cover any of these medications. Which medication would you be most likely to recommend?
Drug A: reduces the number of headaches per year from 100 to 30. It costs $350 per year.
Drug B: reduces the number of headaches per year from 100 to 50. It costs $100 per year.
Drug C: reduces the number of headaches per year from 100 to 60. It costs $100 per year.
This question is based on research on the decoy effect (aka “asymmetric dominance” or the “attraction effect”). Drug C is obviously worse than Drug B (it is strictly dominated by it) but it is not obviously worse than Drug A, which tends to make B look more attractive by comparison. This is normally tested by comparing responses to the three-option question with a control group that gets a two-option question (removing option C), but I cut a corner and only included the three-option question. The assumption is that more-biased people would make similar choices to unbiased people in the two-option question, and would be more likely to choose Drug B on the three-option question. The model behind that assumption is that there are various reasons for choosing Drug A and Drug B; the three-option question gives biased people one more reason to choose Drug B but other than that the reasons are the same (on average) for more-biased people and unbiased people (and for the three-option question and the two-option question).
Based on the discussion on the original survey thread, this assumption might not be correct. Cost-benefit reasoning seems to favor Drug A (and those with more LW exposure or higher intelligence might be more likely to run the numbers). Part of the problem is that I didn’t update the costs for inflation—the original problem appears to be from 1995 which means that the real price difference was over 1.5 times as big then.
I don’t know the results from the original study; I found this particular example online (and edited it heavily for length) with a reference to Chapman & Malik (1995), but after looking for that paper I see that it’s listed on Chapman’s CV as only a “published abstract”.
49% of LWers chose Drug A (the one that is more likely for unbiased reasoners), vs. 50% for Drug B (which benefits from the decoy effect) and 1% for Drug C (the decoy). There was a strong effect of LW exposure: 57% of those in the top third chose Drug A vs. only 44% of those in the bottom third. Again, this gap remained nearly the same when controlling for Intelligence (shrinking from 14% to 13%), and differences in Intelligence were associated with a similarly sized effect: 59% for the top third vs. 44% for the bottom third.
original study: ??
weakly-tied LWers: 44%
strongly-tied LWers: 57%
Question 5: Get a random three digit number (000-999) from http://goo.gl/x45un and enter the number here.
Treat the three digit number that you just wrote down as a length, in feet. Is the height of the tallest redwood tree in the world more or less than the number that you wrote down?
What is your best guess about the height of the tallest redwood tree in the world (in feet)?
This is an anchoring question; if there are anchoring effects then people’s responses will be positively correlated with the random number they were given (and a regression analysis can estimate the size of the effect to compare with published results, which used two groups instead of a random number).
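The slope estimate can be sketched as an ordinary least-squares regression of estimates on the random anchors. The data below are synthetic (built with an assumed true slope of 0.14), not the survey responses:

```python
import random
from statistics import mean

random.seed(1)

# Synthetic respondents: each sees a random anchor in 000-999 and
# gives a height estimate pulled 0.14 ft per foot toward the anchor.
anchors = [random.randint(0, 999) for _ in range(500)]
estimates = [300 + 0.14 * a + random.gauss(0, 50) for a in anchors]

# OLS slope = cov(anchor, estimate) / var(anchor)
mx, my = mean(anchors), mean(estimates)
cov = sum((x - mx) * (y - my) for x, y in zip(anchors, estimates))
var = sum((x - mx) ** 2 for x in anchors)
slope = cov / var
print(round(slope, 3))  # recovers roughly the 0.14 built into the data
```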
Asking a question with the answer in feet was a mistake which generated a great deal of controversy and discussion. Dealing with unfamiliar units could interfere with answers in various ways so the safest approach is to look at only the US respondents; I’ll also see if there are interaction effects based on country.
The question is from a paper by Jacowitz & Kahneman (1995), who provided anchors of 180 ft. and 1200 ft. to two groups and found mean estimates of 282 ft. and 844 ft., respectively. One natural way of expressing the strength of an anchoring effect is as a slope (change in estimates divided by change in anchor values), which in this case is 562/1020 = 0.55. However, that study did not explicitly lead participants through the randomization process like the LW survey did. The classic Tversky & Kahneman (1974) anchoring question did use an explicit randomization procedure (spinning a wheel of fortune; though it was actually rigged to create two groups) and found a slope of 0.36. Similarly, several studies by Ariely & colleagues (2003) which used the participant’s Social Security number to explicitly randomize the anchor value found slopes averaging about 0.28.
There was a significant anchoring effect among US LWers (n=578), but it was much weaker, with a slope of only 0.14 (p=.0025). That means that getting a random number that is 100 higher led to estimates that were 14 ft. higher, on average. LW exposure did not moderate this effect (p=.88); looking at the pattern of results, if anything the anchoring effect was slightly higher among the top third (slope of 0.17) than among the bottom third (slope of 0.09). Intelligence did not moderate the results either (slope of 0.12 for both the top third and bottom third). It’s not relevant to this analysis, but in case you’re curious, the median estimate was 350 ft. and the actual answer is 379.3 ft. (115.6 meters).
Among non-US LWers (n=397), the anchoring effect was slightly smaller in magnitude compared with US LWers (slope of 0.08), and not significantly different from the US LWers or from zero.
original study: slope of 0.55 (0.36 and 0.28 in similar studies)
weakly-tied LWers: slope of 0.09
strongly-tied LWers: slope of 0.17
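The regression approach described above can be sketched with simulated data (the noise level and random seed here are made up; only the 0.14 slope and n=578 come from the survey result):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the anchoring setup: each respondent gets a random
# three-digit anchor and produces a redwood-height estimate that is
# partially pulled toward it (true slope 0.14, matching the reported
# US result; the noise level is an arbitrary assumption).
n = 578
anchor = rng.integers(0, 1000, size=n)
estimate = 300 + 0.14 * anchor + rng.normal(0, 120, size=n)

# The anchoring effect is the OLS slope of estimate on anchor:
# how much estimates rise per unit increase in the random number.
slope, intercept = np.polyfit(anchor, estimate, 1)
print(round(slope, 2))  # recovers a slope close to 0.14
```

With the real survey data one would regress the height estimates on the random numbers in the same way, and test moderation by adding an exposure × anchor interaction term.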
If we break the LW exposure variable down into its 5 components, every one of the five is strongly predictive of lower susceptibility to bias. We can combine the first four CFAR questions into a composite measure of unbiasedness, by taking the percentage of questions on which a person gave the “correct” answer (the answer suggestive of lower bias). Each component of LW exposure is correlated with lower bias on that measure, with r ranging from 0.18 (meetup attendance) to 0.23 (LW use), all p < .0001 (time per day on LW is uncorrelated with unbiasedness, r=0.03, p=.39). For the composite LW exposure variable the correlation is 0.28; another way to express this relationship is that people one standard deviation above average on LW exposure got 75% of CFAR questions “correct” while those one standard deviation below average got 61% “correct”. Alternatively, focusing on sequence-reading, the accuracy rates were:
75% Nearly all of the Sequences (n = 302)
70% About 75% of the Sequences (n = 186)
67% About 50% of the Sequences (n = 156)
64% About 25% of the Sequences (n = 137)
64% Some, but less than 25% (n = 210)
62% Know they existed, but never looked at them (n = 19)
57% Never even knew they existed until this moment (n = 89)
Another way to summarize: on 4 of the 5 questions (all but question 4, on the decoy effect) we can make comparisons to the results of previous research, and in all 4 cases LWers were much less susceptible to the bias or reasoning error. On 1 of the 5 questions (question 2, on temporal discounting) there was a ceiling effect which made it extremely difficult to find differences within LWers; on 3 of the other 4, LWers with a strong connection to the LW community were much less susceptible to the bias or reasoning error than those with weaker ties.
REFERENCES
Ariely, Loewenstein, & Prelec (2003), “Coherent Arbitrariness: Stable demand curves without stable preferences”
Chapman & Malik (1995), “The attraction effect in prescribing decisions and consumer choice”
Jacowitz & Kahneman (1995), “Measures of Anchoring in Estimation Tasks”
Kirby (2009), “One-year temporal stability of delay-discount rates”
Toplak & Stanovich (2002), “The Domain Specificity and Generality of Disjunctive Reasoning: Searching for a Generalizable Critical Thinking Skill”
Tversky & Kahneman (1974), “Judgment under Uncertainty: Heuristics and Biases”
I think this might just be due to the fact that the meme that “time is money” has been repeatedly expounded on LW, rather than because long-time LWers are less prone to the decoy effect. All the rot13ed discussions about that question immediately identified Drug C as a decoy and focused on whether a low-income person should be willing to pay $12.50 to be spared a three-hour headache, with a sizeable minority arguing that they shouldn’t. I’d look at the income and country of people who chose each drug—I guess the main effect is what each respondent took “low income” to mean.
“time is money” seems to me a pretty common and natural way to think if you live in a society whose workers tend to be paid hourly, whether you’re new to LW or not.
Even people nominally paid hourly often cannot freely choose how many and which hours to work. (With unemployment rates as high as there are now in much of the western world, employers have more bargaining power than workers, etc.) It’s not like if I got a headache this evening, I could say “rather than having a three-hour headache, I’ll take this $12.50 drug which will stop it, work two hours and earn $20, and then have fun for one hour”.
Exactly. In South Africa that $350 could represent 16% or more of a possible yearly salary in some of our poorer areas.
Okay, now I’m confused. When I did this question, I remember I ignored C as being strictly dominated by B and pulled out a calculator. When I saw this question in the analysis, I did the same thing before scrolling down. Here’s what I got:
Drug A saves you from 70 headaches at $350/yr, for a cost of $5 per averted headache. Drug B saves you from 50 headaches at a cost of $100/yr, for a cost of $2 per averted headache.
This seems to contradict your statement “Cost-benefit reasoning seems to favor Drug A”. Drug A has a higher cost per prevented headache according to my calculations, which would make Drug B the better one. Am I failing at basic arithmetic, or misunderstanding the question, or what? Please help.
EDIT: I was solving the wrong problem, and a bunch of people showed me why. Thanks for the explanations! I’m glad I got to learn where I was wrong.
Since each drug only reduces the number of headaches to a certain number, cost per headache isn’t the right way to look at it. Compare a drug that reduces the headaches to 99/year and costs $0, to a drug that eliminates the headaches completely for $1.
Instead of comparing the cost per headache, it’s better to assign a value to time, and calculate the net benefit or harm of each drug. If we assume one hour of time is valued at $7.25, or the US minimum wage, and using the stated information that each headache lasts three hours, the free drug nets you 1*3*7.25-0=21.75, drug A nets 70*3*7.25-350=1172.50, and drug B nets 50*3*7.25-100=987.50.
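That arithmetic, as a quick sketch (the $7.25 wage and three-hour headache duration are this commenter's assumptions, not part of the survey question):

```python
# Net dollar benefit of each drug: value of headache-hours avoided
# minus the drug's annual cost.
WAGE = 7.25   # $/hour, assumed value of time (US minimum wage)
HOURS = 3     # hours per headache, per the question

def net_benefit(headaches_averted, annual_cost):
    return headaches_averted * HOURS * WAGE - annual_cost

assert net_benefit(70, 350) == 1172.50  # Drug A
assert net_benefit(50, 100) == 987.50   # Drug B
assert net_benefit(1, 0) == 21.75       # the hypothetical free drug above
```

On these assumptions Drug A maximizes net benefit, even though Drug B is cheaper per averted headache.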
That’s not a good way of looking at severe pain. People often will do long hours of mind-numbing tasks in order to prevent real or imaginary future short-term discomfort, like working out to get in shape for a one-time event.
You’re right; I was generalizing from my experiences with migraines, where the pain goes away if I’m lying in a quiet, dark room
Assuming I did the math right, it seems that folks valuing their time at more than $4.16 an hour should prefer drug A, and those valuing it at less should prefer drug B. To really make this unambiguous, “low income” needs to be defined; assuming it’s at least minimum wage, drug A wins pretty clearly...
I think I did the wrong math ($ per headache saved) when taking the actual survey, sadly...
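The break-even figure is easy to check: Drug A averts 20 more headaches than Drug B for an extra $250 a year, i.e. $250 for 60 headache-hours (the parent's $4.16 is this value truncated rather than rounded):

```python
extra_cost = 350 - 100      # Drug A costs $250/yr more than Drug B
extra_headaches = 70 - 50   # ...and averts 20 more headaches per year
hours_saved = extra_headaches * 3

breakeven_wage = extra_cost / hours_saved
print(round(breakeven_wage, 2))  # 4.17: value your time above this and A wins
```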
You’re right about the cost per averted headache, but we aren’t trying to minimize the cost per averted headache; otherwise we wouldn’t use any drug. We’re trying to maximize utility. Unless avoiding several hours of a migraine is worth less to you than $5 (which a basic calculation using minimum wage would indicate that it is not, even excluding the unpleasantness of migraines—and as someone who gets migraines occasionally, I’d gladly pay a great deal more than $5 to avoid them), you should get Drug A.
A hint that this analysis is worth a top-level post, perhaps?
I think you’re right; I’ve posted it to the discussion section (I guess I’ll leave it here too).
Yes, that would be interesting. Perhaps in a top-level post as Morendil suggests.
IIRC I had read the exact same question on LW before, so it might just be that plenty of LWers taking the survey also had.
How many of the people taking the $55 today have zero income?
I’d like to note that my suggestion as I offered it didn’t include an “Other” option—you added that one by yourself, and it ended up being selected by more people than “Reactionary” “Conservative” and “Communist” combined. My suggested question would have forced the current “Others” to choose between the five options provided or not answer at all.
Fishing for correlations is a statistically dubious practice, but also fun. Some interesting ones (none were very high, except e.g. Father Age and Mother Age):
IQ and Hours Writing have correlation 0.26 (75 degrees), which is the only interesting IQ correlation.
Siblings and Older siblings have correlation 0.48 (61 degrees), which isn’t too surprising, but makes me wonder: do we expect this correlation to be 0.5 in general?
Most of the Big Five answers are slightly correlated (around +/-0.25, or 90+/-15 degrees) with each other, but not with anything else except the Autism Score. Shouldn’t well-designed personality traits be orthogonal, ideally?
CFAR question 7 (guess of height of redwood) was negatively correlated with Height (-0.23, or 103 degrees). No notable correlation with the random number, though.
I looked at this with the data set that I used for my CFAR analyses, and this correlation did not show up; r=-.02 (p=.58). On closer inspection, the correlation is present in the complete un-cleaned-up data set (r=-.21), but it is driven entirely by a single outlier who listed their own height as 13 cm and the height of the tallest redwood as 10,000 ft.
(In my analyses of the anchoring question I had excluded the data of 2 outliers who listed redwood heights of 10,000 ft. and 1 ft. Before running this correlation with Height, I checked the distribution and excluded everyone who listed a height under 100 cm, since those probably represent units confusions.)
It might just pick out the cluster of “Less Wrong personality type”.
Obviously it’s a matter of perspective. Tall people just tower over those redwoods.
In that case, it says something about the cluster as well. For example, Openness and Extraversion wouldn’t be positively correlated just because most LWers are both open and extraverted (or because most LWers are closed and introverted). We’d have to have something that specifically makes “open and extraverted” more likely to happen together than individually.
Something like Berkson’s paradox (people who are neither open nor introverted are unlikely to read LW)?
Good point. Objection retracted (in the conversational sense).
“Siblings and Older siblings have correlation 0.48 (61 degrees), which isn’t too surprising, but makes me wonder: do we expect this correlation to be 0.5 in general?”
In every sibling relationship, there is one older and one younger sibling, so half of all siblings are older siblings—a line with slope 0.5.
A correlation coefficient is not a slope. (A slope changes if you multiply one of the variables by a constant, whereas a correlation doesn’t.)
EDIT: I think the slope is the correlation times the standard deviation of y divided by the standard deviation of x.
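That identity (slope = r × sd(y)/sd(x)) is easy to verify numerically; the simulated data and the true slope of 2 here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 10_000)
y = 2 * x + rng.normal(0, 1, 10_000)  # arbitrary linear relationship

r = np.corrcoef(x, y)[0, 1]
slope = np.polyfit(x, y, 1)[0]

# The OLS slope equals the correlation times sd(y)/sd(x); rescaling y
# changes the slope proportionally but leaves the correlation unchanged.
assert abs(slope - r * y.std() / x.std()) < 1e-9
assert abs(np.corrcoef(x, 10 * y)[0, 1] - r) < 1e-12
```

This is why the siblings regression slope can be 0.5 while the correlation is a different number entirely.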
How would twins reply to these questions?
All the twins I’ve known have regarded the first-born as “older” (and one has been first-born).
In Italy traditionally it’s the other way round. (Don’t ask me why.)
So apparently, according to tradition the twin that is conceived first is believed to be born last, and there are folk explanations like “The first conceived attaches to the uterus first, so is more firmly stuck”. And so the later-born is considered oldest due to having been conceived first, even though that is not even a thing that can happen in the case of identical twins.
Still tracking down a decent history of the phenomenon, but it’s an interesting start.
That sounds very counterintuitive. Do you have a citation? I can’t find information online.
It’s something I heard from my uncles (a pair of twins) and their mother. I can find stuff online, but it’s in Italian. Googling for gemello più vecchio (Italian for ‘older twin’) does turn up relevant stuff, so it’s not something my grandma made up. EDIT: apparently there was a myth that the first to be conceived is the last to be born (which for identical twins is Not Even Wrong). Someone answered on Google Answers, “if you went in a phone booth with a friend, the first of you to get in would be the last to come out, wouldn’t she?”
My Italian should be good enough for that. Grazie!
I was surprised to see that LW has almost as many socialists as libertarians. I had thought due to anecdotal evidence that the site was libertarian-dominated.
I was also surprised that a plurality of people preferred dust specks to torture, given that it appears to be just a classic problem of scope insensitivity, which this site talks about repeatedly.
I was happy to see that we have more vegetarians and fewer smokers than the general population.
Generally, half the time we get visiting leftwingers accusing us of being rightwing reactionaries, and the other half of the time we get visiting rightwingers accusing us of being leftwing sheep.
So if you thought that the site was libertarian-dominated, I’m hereby making a prediction with 75% certainty that you consider yourself a left-winger. Am I right?
There are a number of old posts from the Overcoming Bias days in which EY comments that the audience is primarily libertarian- which makes sense for the blog of a GMU economist. A partial explanation might be people reading that and assuming he’s talking about the modern population distribution of LW.
Related analysis on the public dataset:
1045 responders supplied a political orientation; they’re 30% Libertarian, 3.1% Conservative, 37% Liberal, 29% Socialist, and 0.5% Communist.
226 responders supplied a political orientation and have been around since OB; they’re 42% Libertarian, 3.5% Conservative, 31% Liberal, 23.5% Socialist, and 0% Communist.
242 responders supplied a political orientation and were referred from HPMoR; they’re 30% Libertarian, 2.5% Conservative, 37% Liberal, 30% Socialist, and 0.4% Communist.
Note that analysis of current LW users who have been here since OB is not the same as OB users several years ago, but they are still significantly more libertarian than the current mix.
Also interesting that the HPMoR distribution almost exactly equals the current mix.
Oh yes, that reminds me—I’ve always wondered if MoR was a waste of time or not in terms of community-building. So let’s divide the dataset into people who were referred to LW by MoR and people who weren’t...
Summary: they are younger, lower karma, lower karma per month participating (karma log-transformed or not), more likely to be students; but they have the same IQ (self-report & test) as the rest.
So, Eliezer is successfully corrupting the youth, but it’s not clear they are contributing very much yet.
Mean karma doesn’t seem like the relevant metric; that reflects something like the contributions of the typical MoR user, which seems less important to me than the contributions of the top MoR users. The top users in a community generally contribute disproportionately, so a more relevant metric might be the proportion of top users who were referred here from MoR.
The average user matters a lot, I think… But since you insist, here’s the top 10% of each category:
The top MoR referral user is somewhere around 10th place in the other group (which is 3.3x larger).
The average user that sticks around might matter a lot, but people with low karma are probably less likely to stick around so they’ll have less of an impact (positive or negative) on the community. So maybe look at the distribution of karma, but among veteran users and veteran MoR users, respectively?
What’s ‘veteran’? (And how many ways do you want to slice the data anyway...)
I imagine that when you divide karma by months in the community (while still restricting yourself to the top ten percent of absolute karma) the MoR contributors will look better. I’ll do it tonight if you don’t.
They do a bit better at the top; the sample size at “top 10%” is getting small enough that tests are losing power, though:
The interesting question might be whether people whose primary interest is HPMOR are understanding and using ideas about rationality from it.
Not sure how one would test that, aside from the CFAR questions which I don’t know how to use.
Looking at the four CFAR questions (described here), accuracy rates were:
74% OB folks (“Been here since it was started in the Overcoming Bias days”, n=253)
64% MoR folks (“Referred by Harry Potter and the Methods of Rationality”, n=253)
66% everyone else
So the original OB folks did better, but Methods influx is as good as the other sources of new readers. Breaking it down by question:
Question 1: disjunctive reasoning
OB: 52%
MoR: 42%
Other: 44%
Question 2: temporal discounting
OB: 94%
MoR: 89%
Other: 91%
Question 3: law of large numbers
OB: 92%
MoR: 85%
Other: 81%
Question 4: decoy effect
OB: 57%
MoR: 41%
Other: 49%
One possibility would be for Eliezer to ask people about it in his author’s notes when he updates HPMOR.
On a second reading, I realize that I’m asking about HPMOR and spreading rationality rather than HPMOR and community building.
Is this a typo? Or some text that was lost in the copy-paste?
Typo. I was operating on two variables, hpmor and others, but I guess a search-replace went awry...

I think the site is clearly left-wing slanted if you look at the demographics. Two thirds are liberal, communist or socialist, with the remainder being libertarian. Conservative users especially are incredibly under-represented compared to the general population or even the university-educated population.
It may, however, be noticeably less left-wing on economic questions than similar highbrow sites.
Atheism and IQ are enough to explain most of that. See this Kanazawa paper, or this Gene Expression post (using data which does not have ‘libertarian’, we know from elsewhere that atheist ‘conservatives’ are mostly fiscal and not social conservatives).
I wouldn’t expect it not to be, but this doesn’t change the problems caused by the under-representation of the political position.
Which are?
We are less likely to hear the strongest arguments in favour of those political positions
We are less likely to realize we are straw manning a position
Convenient but unjustified assumptions are less likely to be called out
Our thought and speculation about ethics and values in humans will be skewed
Conservative rationalists feel excluded from the intended audience
Since there are many many more people in the world who hold “conservative” positions than “progressive” ones we may not be properly mining a source of valuable community members.
In other words, the standard pro-diversity arguments apply, and arguably they apply more strongly than for some other categories they have been invoked for. I think value and political diversity is one of the best ways for a community to be able to detect motivated cognition.
“When two opposite points of view are expressed with equal intensity, the truth does not necessarily lie exactly half way between. It is possible for one side simply to be wrong.”—Richard Dawkins
I’m perfectly okay with telling people with specific political opinions that they’re wrong and should shut up. To try to use an uncontroversial example… should someone in the 1960s have cared about underrepresentation of segregationists in their discussions?
I will flat out say that I think people with reactionary viewpoints from the past 200 years have had a remarkable prescience in predicting outcomes. It is simply that once those outcomes come about, we don’t consider them bad any more; indeed, we develop sacred feelings around them.
Assume you agree with all changes that occurred in the mentioned time period. Indeed, assume you agree with the changes that are likely to occur in the next 20 years as well. Unless you have a good reason to believe “moral progress” is coherent and happening right now, history has shown there is literally no way of preventing the inhuman processes of memetic and biological evolution from grinding down your complex values.
This should be deeply disturbing.
I will bite that bullet. Actually, yes, they should have! Since segregationists were right about specific undesirable consequences of integration that could have been avoided with a better thought-out approach or more modest goals. Indeed, very basic segregationist arguments against social engineering measures that were undertaken, such as forced busing, are surprisingly hard to beat.
Now obviously being against such invasive social engineering or affirmative action or disparate impact doctrine is also a possible principled libertarian stance, but the result is segregation, so segregationists often made those arguments as well, and often made them well. They were engaged in motivated cognition, finding the best possible reasons against a policy, just as many people were engaged in motivated cognition to find the best possible reasons for policies. You need to set up a system where those offset each other as much as possible if you want to be confident in your epistemology. If you don’t, you are just writing the bottom line first and then generating the system that comes to the conclusion you want.
If you are a normal educated Western person, you have probably never read (certainly not in the course of a normal education) a non-straw-man argument against women’s suffrage, for eugenics, against parliamentary democracy or nearly any other kind of social political change our society has done for the past several centuries.
This should scare you unless you believe that society, without much well-informed design, happens to function very much like an FAI when editing our instrumental and terminal values in unpredictable ways.
I’ve seen the lot, and far wackier, on teh webz.
Contrarians often make the mistake of taking their opponents’ straw men seriously. My point was more that you certainly haven’t read about such arguments in your high school history textbook, or in a politics debate on the BBC, or in a book on the NYT best-seller list.
You should be open to the possibility that you are wrong.
This obviously does not mean the people you want to shut up are right, but you are very likely to pattern-match people who are right but don’t agree with you to them anyway.
That’s true… most political facts aren’t as strongly confirmed as scientific facts, so you’re somewhat less justified in telling, say, someone with Mencius Moldbug’s opinions to shut up and let the grownups talk about politics than you are telling a young-earth creationist to shut up and let the grownups talk about geology.
Untrue. Paper rejected. ;)
The first one surprises me because hardly anyone on LW seems conservative (and the polls confirm this).
I’m definitely a non-libertarian, so that may be it.
Just in the last two or three months I remember there was one guy who accused us of being the right-wing conspiracy of multibillionaire Peter Thiel (because he has donated money to the Singularity Institute), a few who accused the whole site of transphobia (for not banning a regular user for a bigoted joke he made in a different location, not actually in this forum), one who called us the equivalent of fascist sheep (for having more people here read Mencius Moldbug on average than I suppose is the internet forum median)...
Fanatics view tolerance of the enemy as enemy action. So, yeah, I think leftwing fanatics will view anything even tolerant of either reaction or libertarianism as their enemy—even as they don’t notice that similar tolerance is extended to positions to the left of them.
However, there are a few fairly common (or at least it seems so to me) opinions on LW which are distinctively un-Left: democracy is bad, there are racial differences in important traits, and women complain way too much about how men treat them. We’ll see how that last one plays out.
I think that they appear to be more common than they actually are because their proponents are much louder than everyone else.
One of those is a factual question, not a policy question. (Also, there are plenty of left-wingers who wouldn’t throw a fit at “it appears that black people and Native Americans have lower average IQ than white people, whereas East Asians and Ashkenazi Jews have higher average IQ; the differences between the group averages are comparable with the standard deviations within each group; it’s not yet fully clear how much the differences between the group averages are due to genetics and how much they are due to socio-economic conditions”, at least outside liberal arts departments.)
For the last year or so, I’ve been thinking that a “real” (read pre-WW2) democracy is not just bad but very much right-wing (see Corey Robin’s writings on libertarianism, “democratic feudalism”, etc).
Like some other nebulous concepts, e.g. multiculturalism, I see it as grafted onto the “real” corpus of Left ideas—liberty, equality, fraternity, hard-boiled egg, etc—as a consequence of political maneuvering and long-time coalitions, without due reflection. Think of it: today the more popular anti-Left/anti-progressive positions are not monarchism/neo-reaction/etc but right-wing libertarianism and fascist populism, which often invoke democratic slogans.
Like Konkvistador already said a few times, eugenics started out as a left-wing/progressive movement, and many old-time progressives—including even American abolitionists—were outright racist.
(Metacontrarianism, hell yeah!)
They are also practically non-existent in right wing parties in the West. While being contrarian is a bad sign, getting people from all mainstream political positions to go into sputtering apoplexy with the same input can be a good sign.
I dunno, 2 and 3 seem like things I’d expect the right-wing to believe (though probably with less nuance) in America (not to say they wouldn’t go into sputtering apoplexy if you said certain formulations of those ideas out loud and there was a camera nearby). And who was calling for revolution after the recent election? (tongue somewhat in cheek there)
This might be true of 3 perhaps, but is not for 2.
I’m not sure the link proves your point.
Derbyshire’s firing wasn’t a show for public consumption but a genuine rebuke from the National Review establishment caused by ideological differences.
It was also more over the top than just claiming that race and important traits are correlated.
Maybe. There is still undoubtedly a strong racist component to the right-wing belief melange.
But perhaps we’re arguing semantics. I meant that the belief in question is something that would be associated with the right wing (due to said component), something that would be argued, with not-insignificant frequency, covertly by public figures and publicly by private citizens of that party, not that it’s something a majority of right-wing-identified people would assent to, privately or publicly. Is that unfair?
Considering that any mostly white gathering of Americans is at risk of being called racist until proven otherwise, I’m not at all impressed by this observation. How would you differentiate a world where racism among Republicans is merely background noise from one where it is overrepresented?
Republicans could adopt any possible set of policy proposals they like, the opinions of their voters likewise could change to anything but as long as their voters retained the colour of their skin they would still end up being called racist at least occasionally.
Not really. Private citizens of the party arguing for such things publicly are generally quite rare. If the case were different, why are the examples of racism among the Republican base presented by the media so terribly feeble? The “racism” of, say, the Tea Party, which was presented as this incredibly dangerous far-right fringe movement, is not worth being called that at all.
I do agree some public figures probably do still in private hold such opinions.
If you assume that people of color are at least moderately competent judges of their own interests, then the Republicans not attracting people of color should at least be weak evidence that Republicans show prejudice.
It’s weak evidence because there are other possible explanations—perhaps people of color don’t like feeling so strongly outnumbered so there’s a stable situation which isn’t a result of white racism.
It is also weak evidence that the Democratic party is offering non-white voters benefits beyond those they offer to white voters. A model of a race preference Democratic party and a race blind Republican party works just as well as long as you assume Democrats are slightly anti-white.
See Thomas Schelling’s Models of Segregation
Or that Democrats show prejudice in the other direction and thus become more attractive… I guess many Republicans would prefer that explanation :-)
So you believe that racism is not alive and well in modern America and American politics?
You don’t think that the “birther controversy” was racist in nature? You think this whole thing is a coincidence? You think this type of thing doesn’t happen? You think this is a complete fabrication?
This seems like a complete failure of critical thinking.
If this isn’t what you’re saying, could you say plainly what it is you believe and why?
No.
Racists do not generally like Obama because he has an African father. Is this incredibly surprising? Do a Google search for racist epithets and, say, Condoleezza Rice.
Are you really saying that if Democrats had a white candidate on election day 2016 and Republicans a black one, you wouldn’t find the appropriate slurs online?
Of course it does. Do you have evidence it happens often enough to be a concern rather than pointing out anecdotes? Also what has this got to do with Republicans rather than American society as a whole?
No, why would I? A mere two decades before that there was basically a new splinter party just on the segregation issue.
I will say the same. You seem to have utterly failed at the principle of charity if not outright straw manned my position.
I am not an American. I’m not a Republican. Yet the examples you picked seem to target a CNN caricature of a Fox News viewer. Why did you pattern-match me to that? That this post got upvoted to 3 karma before my response suggests that filling a post with solid partisan digs can be a good way to gain votes on Less Wrong, and is an indication that we suffer from a lack of diversity.
I believe most people in America and the West have on a conscious level an irrational aversion to racism two or three orders of magnitude out of proportion to the actual utilitarian damage it causes. This is partially caused by virtue signalling spirals.
The opportunity costs of this are non-trivial.
Epithets. An epitaph is something else.
Thank you for the correction! I’ve put the words in my Anki deck. English is not my native language so I often make mistakes, please if you ever spot an error don’t hesitate to comment or PM me.
If I were interested in racist epitaphs, I’m not sure I could find them on Google.
Googling for “racist epitaphs” as was suggested actually mostly turns up articles on racist epithets.
Googling for “racist epitaphs -epithets” also mostly turns up incorrect references to racist epithets, perhaps unsurprisingly.
”racist epitaphs -epithets tombstone” turns up a bunch of stuff unrelated to racism.
This seems… well, nonsensical, to be honest. Either there’s a typo somewhere in it or I’m completely failing to get the point. Would you mind clarifying?
“is” was a typo.
Yeah, I’d assumed that one. Still, about the only way I can make sense of the line is if I assume you mean Afrocentric racists, and those are… well, actually not vanishingly rare (aside from some lexical quibbles on “racist”), but certainly rarer than the Eurocentric kind and pretty clearly not what was being discussed in context.
The second sentence was sarcasm.
That would explain it.
Yes, it’s amazing how easy it is to find evidence of racism when you’re willing to claim things are secretly motivated by racism with no evidence.
Wow, there appears to be about one tweet a day that uses both the words “Obama” and “nigger”, and almost half of those appear to be pro-Obama tweets using “nigger” ironically.
Given the correlation between race and crime, I don’t see your point.
Since you appear to be relatively new to LW, let me point out that this kind of ad hominem is completely inappropriate on LW even if it didn’t follow laughably weak arguments.
What exactly do you think motivates the birther movement? It’s pretty clearly irrational belief.
Why isn’t “politics is the mindkiller” sufficient?
First, the claim is incredibly unusual compared to standard political irrationality. The last time this claim occurred was against Chester Arthur (US president, 1881); meanwhile John McCain (who was born in the Panama Canal Zone, outside the sovereign territory of the US) received no challenges to his eligibility for office.
Second, the ratio of intensity to plausibility is much higher than that of most American political irrationality—the long form has been released, after all. This suggests there’s more than ordinary political mindkiller at play.
Why isn’t “Politics is the mindkiller” sufficient for people to believe Obama is a space alien? Or for people to believe that John McCain isn’t a natural born citizen, particularly given that he wasn’t born in the US? “Politics is the mindkiller” isn’t and shouldn’t be able to account for just any negative belief that people hold about a candidate they don’t believe in.
Also, I find the birther controversy weird because I think some laws matter a great deal more than others, and the natural born citizen rule doesn’t serve any important purpose that I can see.
I asked a birther what he expected to happen if it turned out that Obama was proven to have been born in Kenya, and he hadn’t even thought about the question, which probably implied that few if any birthers had thought about it either, or they would have been discussing it. I’m not sure this proves anything about racism, but it’s evidence that there was something weird going on.
There’s a danger in this being too broad an explanation, such that it doesn’t actually explain anything. In this context, some ideas are so extreme that simply using that explanation seems potentially insufficient. That said, while racism may be in play, there’s some evidence that what is going on here is instead a combination of politics-is-the-mindkiller with people who are already less connected to reality than others. Thus, for example, this probably explains why Alan Keyes was a birther (racism seems unlikely to be relevant given that Keyes is black).
But at the same time, there’s definite evidence for something other than just politics as the mindkiller. In particular, although some on the left did pick up on the birtherism early on (for example Phillip Berg), it didn’t spread to the left-wing opponents of Obama in any substantial fashion, as it did to those on the right. In that context, the fact that Republicans in general are more racist seems robust. (For example, Republicans are much more likely than Democrats to have a negative opinion of interracial marriage or think it should be outlawed.) And the possibility of a causal relationship has to be at minimum a located hypothesis. On the other hand, there are other possible hypotheses, such as the tendency in the last few years for self-identified conservatives and Republicans to turn to their own news sources. While this occurs on both ends of the American political spectrum, it seems, especially in the context of the last election and the response to people like Nate Silver and Sam Wang, that in the last two years it has occurred more on the right. Moreover, it doesn’t actually need to be more prominent on either side of the political spectrum to have had this sort of effect.
Overall, politics as mindkiller seems unsatisfactory in this context, but racism is definitely not the only possible other causal factor at work here. It seems likely that a variety of different factors are at play, and deciding how strong any given one of them is may be very tough.
I think xenophobia is at least as likely a motivation as racism, considering that his father wasn’t a U.S. national, and he spent some of his formative years in Indonesia. People accuse him of not being a natural born citizen because they’re specifically afraid that he’s too foreign.
What is the distinction you are attempting to draw between xenophobia and racism?
People may easily regard people of different races from their own country as being part of their cultural in-group, whereas people from different countries, particularly ones like Kenya and Indonesia which aren’t Western first-world nations, are cultural outgroup members.
I feel like we are having an unintentional definitional dispute. In the US political realm, the essence of the accusation of “racism” is unjustly treating others as cultural outgroup.
I think that’s an inappropriate inflation of the term, since under that definition a person could easily be “racist” against members of their own race who have different cultural backgrounds, but not against ones who don’t. Racism is a basis for unjustly treating others as outgroup members, but it can only lower the quality of our discourse if we describe all cases of unjustly treating others as outgroup members as racism.
If I understand your distinction correctly, the irrational hostility of the Californians to Oklahomans during the 1930s Great Depression is xenophobia and not racism. I guess I’m having difficulty coming up with examples of irrational / hostile racism that isn’t xenophobic. What exactly is the goal of the distinction you are making?
Well, the narrower definition of xenophobia is a fear of people from other countries. If one interprets “a fear of that which is perceived to be foreign or strange” broadly enough, then all racism is xenophobia, but not all xenophobia is racism.
The point of the distinction I’m making is to set out a class wherein people could be expected to mistrust Obama for having a Kenyan father and having spent a number of his childhood years in Indonesia, but not to mistrust or be unwilling to vote for a person of the same racial heritage who was born and raised in their own neighborhood to parents who were both American citizens.
For a broad enough definition of racism, I don’t doubt that most birthers are racists; the Implicit Association Test suggests that most people have some degree of racial bias. But I do think that the fact that Obama is half Kenyan and spent some of his time growing up in Indonesia has much more explanatory power with respect to the birther controversy than the fact that he falls into the demographic category of African American.
To clarify, I don’t think we are currently having a substantive disagreement. That is, I think we both agree that the continued strength of the birther movement is an expression of some people’s belief that Obama is outgroup and the predictable irrationality that follows from that conclusion.
That’s what most people in the US mean when they talk about the problems of racism. If I could persuade everyone to just call this process “Othering” without specific reference to race or sex or nationality or whatever, I’d consider it.
There’s nothing wrong with trying to show that colloquial usage is misleading. Better definitions can often lead to clearer analysis. You are suggesting that misuse of “racism” is confusing the analysis, but I don’t see how. Some of that impression comes from my sense that the birthers wouldn’t vote for Obama even if he were born and raised in Chicago.
“Othering” is broad enough to encapsulate the phenomenon, but also broad enough that it doesn’t narrow down the prejudice under discussion. I’ll admit to also having a kneejerk dislike of any use of the term, since I’ve read and have an abysmally low opinion of the work of the author who popularized it.
I don’t doubt that birthers mostly wouldn’t vote for Obama even if he were born and raised in Chicago, but that’s because I suspect that there’s an extremely strong overlap between that level of xenophobia and people who’re socially conservative enough to not want to vote for him on a policy basis.
Birthers are probably mostly racist, by broad enough definitions of racism, and they are certainly almost all conservative, but that doesn’t mean that their racism or their conservatism are the best explanations for their being birthers.
If a person opposes a specific government policy, and another person argues “this person just opposes the policy because they’re rich,” when the person who opposes it is a rich libertarian, and the policy is opposed by almost all libertarians, but mostly not by rich people who’re not also libertarians, then “opposes the policy because they’re rich” is a bad explanation.
I agree with all of that. I just don’t understand what non-othering irrational racism is.
Edit: Due to insufficient background, I can neither defend nor attack Said, but my sense is that Othering and irrational outgroupism are essentially the same phenomenon. At the very least, irrational outgroupism is a very good steelman of Othering.
I’m not arguing that there is non-othering irrational racism, and even if there is, I wouldn’t be arguing that it’s relevant to the issue under discussion. (Now that I think of it, there probably is, in the form of self-hating racism, where the group one is prejudiced against is “us”.) But there are also non-racist forms of othering, or outgrouping, and I think that racism is not the most salient issue of prejudice in this matter.
Fair enough. I think some of the problem is that colloquial language lacks the technical vocabulary to communicate the issue precisely. For example, I think the common usage of xenophobia and racism is not a natural kind, and othering captures the insight that colloquial usage is generally aiming for when it says “racism.” Given that, I think “birtherism is racism” is about as accurate a colloquial phrase as we are likely to meet—as intended, that phrase doesn’t agree with your point, despite its imprecision.
The usage I’m suggesting helps clarify the distinction between outgroupism and the personal issues embedded in “self-hating” racism. But it is technical vocabulary that has not yet spread into common usage. I don’t think the lack of technical vocabulary indicates an unusual level of confusion on this issue.
Most importantly, pushing the point masks fundamental agreement between you and others like JoshuaZ or TorqueDrifter.
You will find very few people in mainstream right wing parties arguing for these three things too (except perhaps in a very small way the last one).
Can you please elaborate on what you meant by this? The way you said it made me feel rather uncomfortable.
I wasn’t intending to make you feel uncomfortable. On the other hand, I don’t think dark arts require a lot of intent.
Anyway, I believe that anti-racism/some parts of current feminism are an emotionally abusive attempt to address real issues.
Most of the anti-racists here have not been abusive, but imagine a social environment where this is the dominant tone.
The emotional abuse leads to a lot of resistance and avoidance, but the issues being real has its own pull.
I’ve seen people (arguably including me) who were very unfond of the emotional abuse still come to believe that at least some of the issues are valid and worthy of being addressed. What’s more, I’m reasonably certain that at least some of those people don’t realize they’ve changed their minds.
I don’t know where you personally will end up on these issues (it wouldn’t surprise me if the discussion of gender prejudice brings in substantial amounts about racism and possibly ableism), but I expect that LW will be taken pretty far towards believing that (many) men mistreat women in ways that ought to be corrected. It wouldn’t surprise me if (this being LW) there will also be more clarity about ways that women could and should treat men better.
Lessening Inferential Distance is only the first post in a series. I’m expecting that harder issues will be brought up in later posts.
I believe that, with your linked comment getting 32 points, you are making Nancy rather uncomfortable in turn.
I’m fairly certain that we’re all suffering from the hostile media effect; e.g. you keep saying how there’s creeping censorship of right-wing ideas on LW, while I’m disturbed by such complaints getting karma and support :)
Consider the way this post was down-voted, along with some of the discussion, particularly here, as exhibit A.
OK, I’m considering it. How does it indicate creeping censorship of right-wing ideas on LW?
I neither upvoted nor downvoted that post, so my guesses at the motivations of downvoters shouldn’t be trusted too far, but my guess is that mostly it was downvoted because, while it was ostensibly about a technique of rationality, (1) what it said about that technique was mostly very obvious, (2) a big chunk of the article was devoted to the discussion of an entirely different topic with considerable mindkilling potential, and (3) this gives some ground for suspicion that the rationality-technique discussion served largely as a pretext for airing the author’s views on that topic. (A topic that others in the past have been curiously enthusiastic to air similar views on.)
Having said all that, I’ll add that in fact I don’t think it likely that MTGandP is a racist or that s/he wrote that post in order to bolster racist ideas, and I think that if anyone downvoted that post because they wanted to discourage a nasty racist (rather than, e.g., to discourage other people who are nasty racists from posting similar stuff) then they made a mistake. But the point is that the downvotes don’t look to me like censorship of right-wing ideas; they look to me like some combination of (1) finding the post unenlightening and (2) seeing it as promoting racism.
As for the “discussion, particularly here”, again that doesn’t look to me at all like censorship of right-wing ideas, nor like people arguing for the censorship of right-wing ideas. It looks to me like one person apparently thinking that racism has gone away and other people objecting that no it bloody hasn’t. (Exception: the very first comment in the thread you linked to says, roughly, “race is a needlessly contentious thing to discuss to make your point”, which (1) is true if the point is what MTGandP says, rather than that being a pretext for talking about race, and (2) doesn’t constitute any sort of attempt at censorship, as opposed to advice that some topics are likely on the whole not to produce helpful discussion.)
Incidentally, I notice that some people in this thread are insisting that there’s nothing particularly right-wing about believing in racial intelligence differences, whereas the only thing I can see to link the downvoting of the post you linked to with “right-wing ideas” is its defence of (discussing the possibility of) racial intelligence differences. Curious.
See my comment here for why I think the example was appropriate. Furthermore, the way you’re throwing around the term “racist ideas” suggests you are also making the mistake the post describes with respect to the example given.
You might have missed the part where AndrewHickey says:
Depends on what you mean by “right-wing”. It’s certainly true that there are currently a number of left-wing people who believe that discussing race and intelligence is morally unacceptable.
There’s also a number of people who think there are bad intellectual confusions in every race-intelligence comment they have ever seen.
I think it’s interesting that you keep changing the subject from “what propositions Greens believe” to your beliefs about “what topics Blues think are morally acceptable to discuss”. It comes across as though you’re trying to make some sort of deeply subtle point about what beliefs you think it is morally acceptable to believe you have about Blues.
I was just trying to explain what Konkvistador probably meant by that statement.
Why?
No, I didn’t miss it. I don’t see any attempt at censorship there; I see someone saying: you appear to be ignorant about X, and in view of that you would do better to leave the subject alone.
No, I don’t think it does. Because so far as I can see there is nothing else about the post, or the votes it got, or the ensuing discussion, that anyone would consider an instance of “creeping censorship of right-wing ideas”. Given that you cited it as an example of that, I can only conclude that you consider belief in racial intelligence differences to be a “right-wing idea”. My own understanding of the term “right-wing” doesn’t come into it, unless there’s something else in the post that’s distinctively right-wing; did I miss something?
Because you’re using “racist” as a property of an idea independent of its truth value that lets you dismiss it.
Well, especially on LW, the normal response to ignorance is to help educate the person being ignorant rather than to attempt to dismiss him as quickly as possible.
Furthermore, the statement is more like “you said something that could be stretched to imply you don’t know X (where X is itself a highly politicized claim whose truth value is a matter of political dispute), and that means you are too ignorant to even say anything about the topic”.
What idea do you think I’m doing that to?
(It seems clear to me that there are ideas that can reasonably be described as “racist ideas”. For instance, the idea that black people are fundamentally inferior to white people in abilities, character, and personal value, and that this means they should be segregated to keep them out of the way of superior white people. Or the idea that the right thing to do with people of Jewish descent is to put them into concentration camps and kill them en masse. So if you’re saying that merely using the words “racist ideas” is proof of error and confusion, I think that’s wrong. On the other hand, if there’s some actual idea you think I’m wrongly describing that way, then let’s hear what idea that is.)
I’ve seen both quite often.
But let’s suppose for the sake of argument that (1) Andrew Hickey was in fact intending to dismiss MTGandP as quickly as possible and to get him (note: actually I have no idea whether MTGandP is male or female; indeed the name rather suggests a collective) to drop the subject, and that (2) such behaviour is very atypical on Less Wrong. What then? How does this indicate “creeping censorship of right-wing ideas”?
The most it indicates, being as uncharitable as possible to AH, is that one person (AH) is trying to intimidate another person (MT) out of talking about an idea that AH considers racist. How do you get from “AH tries to intimidate MT out of talking about the idea that black people might have inferior intelligence” to “LW exhibits creeping censorship of right-wing ideas”? No one was censored. There was no deluge of people agreeing with AH and telling MT to shut up. The idea in question isn’t, at least according to others in this thread who appear sympathetic to “right-wing ideas”, particularly a right-wing one anyway.
In the ancestor you wrote:
What work is the word “racist” doing in that paragraph that couldn’t be better done by the word “wrong”?
The fact that MT’s post is at −7 and AH’s comment is at +4 rather than the other way around suggests the problem isn’t limited to AH.
The word occurs several times in different contexts; I take it (from what you’ve said elsewhere here) that you’re referring to the instance where it prefixes “ideas”. The work it’s doing that couldn’t be better done by “wrong” is specifying the particular variety of allegedly-wrong ideas I’m saying I think MTGandP isn’t trying to promote.
… indicates that there are some other people who think MTGandP’s post wasn’t very good (which might be for many reasons), and that there are some other people who agree with AH (which also might be for many reasons).
I repeat: How does any of this amount to “creeping censorship of right-wing ideas”? What specific right-wing ideas? How are they being censored?
The comment you are referencing was written in disappointment over a discussion with hundreds of posts and a Main level article at 50+ karma.
I think this may be true to an extent, but this isn’t my perception alone; several LWers have complained about this in the past year or so.
What disturbs you about this specifically?
Like I already said a few times, nearly all the highly upvoted posts and comments that explicitly bring up ideology—like yours—appear to come from the right. Duh, you’ll say, if most of the LW stuff is implicitly liberal/progressive, then of course what’s going to stand out is (intelligently argued) contrarianism. But the disturbing thing to me is that the mainstream doesn’t seem to react to the challenge.
What I have in mind is not some isolated insightful comments e.g. criticizing moldbuggery, defending egalitarianism or feminism or something like that—they do appear—but an acknowledgement of LW’s underlying ideological non-neutrality. E.g. this post by Eliezer, or this one by Luke would’ve hardly been received well without the author and the audience sharing Enlightenment/Universalist values; both the tone and the message rely on an ideological foundation (one that I desire to analyze and add to—not deconstruct).
Yet there’s not enough acknowledgement and conscious defense of those values, so when such content is challenged from an alt-right perspective, the attacking side ends up with the last word in the discussion. So to me it feels, subjectively, as if an alien force is ripping whole chunks out of the comfortable “default” memeplex, and no-one on the “inside” is willing or able to counterattack!
The thing is right wing thinkers who end up on LessWrong and stay in the community should be comforting to you, these are the people who believe engaging in dialogue and common goals is possible. And I would argue they empower all members of the community by contributing to the explicit goal of refining human rationality or FAI design (though they might undermine some other implicit goals).
Compare this to the idea of right-wing thinkers who take what they can from rationality and the alt right and then, seeing they are not accepted in the nominally rationalist community, leave for the wider world. Even as individuals that should concern you, but imagine a right-wing community forming, powered by the best tools from here. Somehow it seems its left-wing-only counterpart would be weaker.
The question is, how much do they contribute to the “value-neutral” goals like epistemic rationality/practical knowledge/whatever, versus the disutility that I suffer by them succeeding at their values—and perhaps getting to influence the future disproportionately, if LW/SIAI achieve a lot and give leverage to all participants? Extreme right-wingers all seem to share the explicit values of institutionalized dominance, rigid hierarchy, rejection of universal ethics and the suppression of any threat to such an order.
For example, you’ve quoted Roissy around here before as a good instrumental rationalist and worthwhile writer—and, say, Hanson links to him, and Vladimir_M endorsed him—yet I think that he must’ve already caused enough misery with his blog and his personal actions, never mind whatever political impact his vile thoughts might have. I don’t think that our community should be willing to cooperate or communicate with thinkers like him. At all. And he’s small fish compared to the intellectual currents that might appear if the “Dark Enlightenment” grows some more. I have pondered where those ideas might lead, and it fills me with equal part horror and rage.
...
If this movement indeed has potential for growth, I wish for a broad cordon against it, from academic liberals like Corey Robin to far-left writers like Matthew Lyons to LW-style progressive technocrats.
You are too quick in ascribing incompatible values to people you disagree with. That’s the cheap way out; it allows you to write off their opinion without considering the fact that they might have the same terminal values as you, and arrived at their instrumental position for rational, empirical reasons. Then you’d have to actually consider whether their position is correct, instead of just writing them off.
This is the straw-man version you get taught about by the Universalist establishment. Don’t take it seriously as what these folks are actually thinking. Some people are just dumb and evil, and most confuse “this is instrumentally a good idea” with “this is terminally a good idea”, but there are fewer of them than you are taught, and there actually are good reasons for the apparent craziness.
It is perfectly possible for someone to have the same values as you and consider (the non-straw) version of those things to be instrumentally a good idea.
I don’t know what you are thinking but I know that feel. I had that same feel just a few months ago. I used to look at authoritarians, racists, PUAs, and such and think, “What the fuck is wrong with these people? How could they be so wrong? Are they evil?” Mostly I just felt that horror and rage, though.
The truth has a certain ring to it. I first noticed that truthiness with LW; “wow, these guys get thinking right”, then a while later, with MMSL (married PUA) “Wow, this stuff is totally different from what we’re taught, but it works (on my wife)”. Then with do-ocracy, and authoritarianism “wow this just works so much better for meetup organizing”. Then with HBD, when I realized that I could build an acceptable line of retreat in the case that the racists were right on the factual questions.
And then, to quote moldbug: “for a wide variety of controversial issues, it would be very, very easy for any smart young person with a few hours to spare to see what the pattern of truth and error, and its inevitable political associations, started to look like.” That is, the “Dark Enlightenment” convinced me, a former hardcore anarchist.
So please, please consider that your enemies are not evil mutants. That people might reject democracy, and accept dark enlightenment ideas for actual good reasons, not just because they have magical “incompatible values”. Please, please consider that you may not have all the facts, and that you may end up changing your mind on some of these issues.
Please don’t. What if you’re wrong? How will you realize your error if you put in hard blocks against certain ideas?
In response to your concerns, I ask one very specific thing of you. Please go and re-read Three Worlds Collide. Right now.
Nitpicks:
I reject it too. So?
Anarchist more like Bakunin or Durruti, or more like Rand? If it’s the latter, then your statement is remarkably unsurprising. So much of this is just the logical development of right-wing libertarianism.
WTF are you talking about. Just above, I was complaining how the “Universalist establishment” is silent even on the existence of the alt-right. In particular, it’s pigeonholing all opposition as either Strawman Christian Fundamentalist, Strawman Arrogant Capitalist or Strawman Racist Hick. Corey Robin’s polite and respectful, diligently researched work, The Reactionary Mind, got savaged by the NYT. If the goddamn New York Times is not the Pravda of the mainstream “Universalist establishment”, I don’t know what “establishment” we’re talking about at all.
One possible development of right-wing libertarianism. Specifically, what happens if you attempt to coherently extrapolate libertarian maxims, forgetting the original reason for stating them.
This is actually a common general failure mode: one starts with an ethical injunction and notices that it contains a term, X, that is vaguely defined. Rather than thinking about what definition of X would make the injunction make the most sense (which is admittedly dangerous with ethical injunctions) or treating the definition as a Schelling fence, one attempts to formulate a coherent definition of X that turns out to be very different from the one in use when the injunction was being formulated. In the extreme case one might conclude that X includes everything or nothing.
For example, libertarians believe that private parties should be free to do as they wish. Moldbuggians extend the definition of private parties to include governments. (Edit: Disclaimer: I have read very little of Moldbug’s writings, so this might not be an accurate description of his position.)
Your own position, if I understand it correctly, suffers from a similar mistake. Specifically, you take the maxim “It is wrong to hold someone responsible for something that’s not his fault”, and narrow the definition of “fault” until nothing is ever anyone’s fault.
Very good general point. This post by John Holbo is an examination of this “slippery slope towards absolutism” that libertarianism is at risk of falling down. Holbo is a liberal and part of his goal is to score points against libertarianism, but I think he is on to something.
I don’t think, however, that this is an accurate description of Moldbug’s failure mode. The “family resemblance” of his doctrines with libertarianism is not through an ethical injunction of formal liberty to dispose of property, extended to governments. It is rather through a cluster of empirical and empirical-ish right-wing beliefs (government regulation is corrupt and inefficient, Austrian economics is correct and Keynesianism is nonsense, liberal policies on crime are abject failures, etc). His ultimate terminal goals seem to be social order and the minimization of conflict. These lead to the rejection of democracy and its replacement by an all-powerful absolute government as the best way to eliminate both crime and the inefficient jockeying of factions for political power; then the libertarian faith in efficient free markets provides trust that (a) a “patchwork” of such states would be enough to prevent abuses, through competition and right of exit, and (b) within each state, the government will adopt broadly libertarian policies as the way to maximize prosperity to be able to extract the Laffer maximum in taxes.
So instead of starting from libertarian values and developing them in a different direction, his system starts with a very different value and develops it in a direction that ends up close to anarcho-capitalism.
Good description, but I think that Moldbug’s ideology also has a “hidden” arational/romantic side, although it’s simultaneously a technocratic one—a Randian aesthetic of sorts, crossed with a Roman-style cult of mastery and dominance. Consider his hero-worship obituary for Steve Jobs, and compare it with Corey Robin’s enlightening examination of Joseph de Maistre. Both of them praise and admire above all competition, victory, fiercely defended supremacy, strength through ruthless adversity, control.
M.M. talks about “social order” and “minimization of conflict” not just because he wants to maximize hedonic utility for humans or something generic like that. Rather, he wants a certain mode of existence, where a technocratic system—a crowdsourced monarchist AGI of sorts—will actively seek out and ruthlessly destroy every disruptive element, every irregularity, every bug—and then continuously apply economic and political coercion to prevent further disturbance. He deeply and sincerely wants the paperclips to run on time. Consider these posts on the link between the engineer/tech-geek mindset and fundamentalism/authoritarianism/far-right radicalism.
Please believe me when I say I know how all this feels from the inside. I fear this mindset in others because my own brain can run it and I find the effects unacceptable. (I wouldn’t hesitate to proselytize for e.g. forced total wireheading or a bloody world revolution—if it was the only way to avoid this future.)
I read it again a few weeks ago, does that count? What are you getting at?
More like Bakunin, but I never really followed any school of thought.
I apologize for reading you out of context.
As someone who finds alt-right ideas interesting to read about and discuss, but is at the end of the day a conventional mainstream liberal, the advice I’d give you is: you should chill out.
Discussion of political topics at this site, as at Moldbug’s and other related ones (and at the vast majority of blogs and sites all over the political spectrum, with the possible but tiny exception of a handful of blogs connected to the D or R party apparatus or to insiders affecting government policy decisions), is essentially mental masturbation: something that will not affect the future of humanity in any way. It is just a way to pass the time that some find interesting, as others prefer solving Sudoku puzzles or pondering Newcomblike problems.
Your feeling that a group of ideological “outsiders” who don’t share your values is growing in influence, and might take over if they are not “cordoned off”, leading to some horrible catastrophe, sounds like the kind of feeling appropriate for a small hunter-gatherer tribe, where if a dozen or two of your enemies join forces and take over, you will have a very bad time. It is not appropriate for the objective situation of a forum with several thousand people, and much less for a country of 300 million people or a humanity of 7 billion people. The future of the world, even the future of LW, is not going to be shaped by the occasional crypto-racist (/sexist/fascist/etc) posts of a handful of people.
Sufficiently bad government can make a large difference, so it’s not irrational to oppose bad ideas. On the other hand, most bad ideas don’t get a chance to take hold. And on yet another hand, if you don’t like something, it’s very tempting to evoke the worst possible consequences and make them seem as vivid as possible.
Sure, it is reasonable to oppose bad ideas and to worry about worst-case scenarios. But when these are objectively low-probability, the reactions of “horror and rage” seem disproportionate.
Konkvistador, maybe you would mention your recent… little incident? (If no, then sorry, never mind.)
Many in the rationalist community are also part of the memetic cluster of the “Dark Enlightenment”. Moldbuggians, PUAs and HBDers are noticeable and seem to be participating in good faith on this forum, making various contributions while being mostly tolerant and polite to those of differing views. I argue this kind of ideological diversity and cooperation is vital to the goals of this community.
Again your post causes me to pause in concern. We don’t see many arguments on LW calling for a wide political coalition to disband and attack the Cathedral, an argument I think I could make quite convincingly here if I wanted to. The way well-meaning people would understand and implement your call would lead to my own exclusion and that of others such as Vladimir_M.
Should those like me be hanging out in Roissy’s comment section rather than here?
Konkvistador, you were deep in the Enemy’s counsel! Tell us what you know! Do you really believe that they are all like Derbyshire, merely doomsaying and wallowing in bitterness? Their numbers grow by the hour; they will first be encouraged by this, then emboldened, then they will gather every single forbidden idea, every scrap of dark knowledge, and put Universalism to the test.
They profess scorn of all dreams and utopias, yet they have their own desire—Pronomianism, a stable world, safe for domination and slavery, where the strong are free from restraint and convention and the weak are free from choice and autonomy. They know where they want to go, they know their enemy, they do not fear for their feelings, conscience or sanity. Mainstream Universalism has only sheer numbers and inertia against these force multipliers.
I believe that we ought to strike as soon as possible. Few on the Left are alerted and concerned yet—but people like Land probably don’t expect a counterattack until much later, and surely don’t expect it to come from outside the Cathedral. Isolate them epistemically while they’re still few, attack their values as evil and dehumanizing, drive them into a phyg-like structure that would be bad at growth. So, what else can be done?
You underestimate universalism. It has adapted before. Recall that the Cathedral is a warm-body machine, a belief pump. The victory of Democracy in the age of conscription and the printing press was no fluke. So as long as human minds by the billions can be thrown into the gears of war, its complete defeat is unimaginable. What you must defend is not the ideology but the strategy. So clearly, in order for this strategy to be viable, you have to burn the mutant, kill the xeno, and purge the heretics.
For the Emperor!
I see the “dark enlightenment” as a very minor force with little potential for growth, but one that intellectually seems a necessary correction to some of the mistakes of the first “enlightenment” that have metastasised over the past two centuries.
It won’t kill Universalism or even dethrone it, it might however create the happy state of affairs where the Cathedral’s theocratic nature is recognized as such and considered legitimate but people don’t take it too seriously. Like say the Anglican Church a century or two back.
Roissy is not dispensing any advice that goes beyond what is common in sexual cultures created by well-meaning universalists in the inner city and lower class. Philosophers such as Nick Land may be scary in their style and thoughts, but their inquiry follows the tradition of Nietzsche and Schopenhauer. Bloggers like Moldbug are fascinated by the civilized aspects and achievements of Western civilization in the past more than by its hierarchy. Their more scientifically minded members, such as John Derbyshire (whom you consider to have a grim heart), are rather reasonable. And stepping away from their atheist mainstream to their intellectual Christian faction? Do you even have a problem with those?
I grow more and more convinced that the dark enlightenment is a reformation of universalism rather than its abolition. Recall one of their favourite memes is fighting Lies and the search for Truth, a more Christian notion could not be found.
I’m not an expert on this by any means, but I always thought of that as a Christian syncretion of a Greek preoccupation. A lot of the more philosophical side of the historical Christian worldview got its start that way, and Aquinas in particular had a lot of Aristotle in him.
Yes, I think this is correct; upvoted. What I wanted to emphasize is that they received these particular memes almost certainly via Christianity, even if the religion wasn’t their origin. It is evidence in favor of them carrying other universalist assumptions and values from the same source.
....
This might fit my definition of “reason”… but what is noble or compassionate about it? How is Derbyshire preaching acceptance of inequality and submission to Nature different from the Catholic Church preaching acceptance of death and submission to God? If you think it reasonable to loathe death, why would you not loathe the genetic lottery?
I support Eliezer completely. Therefore I have to oppose Derbyshire unflinchingly.
This does not seem like submission to nature to me. I do not think he would object at all to, say, genetic engineering or eugenic programs aimed at reducing such suffering or boosting cognitive performance.
Derbyshire is asking us to please stop trying things that do not work and scapegoating those who aren’t responsible for misery inflicted by nature! I find it remarkable that you do not seem to grasp the moral relevance of avoiding scapegoating people at all! It is a terrible thing to look down on people and make them feel guilty and bad for something that is not their fault.
If you want to find nobility and compassion I say look here.
You read the article but did not understand it. Derbyshire stands where he stands intellectually because he can not do otherwise, no more than he can convince himself that there is a God. That is something I understand and sympathize with.
I will go further and say that he fears many of the same societal outcomes that you do.
But look, he demands that we accept it as a tolerable state of affairs! Eliezer says the opposite—yes, no particular person is to blame, but things are still horrible; we’re still living in a nightmare. To borrow from left-wing jargon again, I want a right to negativity here, a forceful statement that the default/normal/natural condition is awful, even with no-one to blame, and that there is an ethical imperative to ameliorate it.
Derbyshire’s article should have begun with “oughts”, his “is” statements might be true but they’re insufficient for humans. The fact that you being born e.g. black and in the slums and now you’re likely fucked and maladapted is no-one else’s fault does not mean that you are not entitled to scream, to express anguish. And dude, there’s a lot of anguish!
Taboo “tolerable”.
What ethical system are you using to make that assertion?
Eliezer is a utilitarian. Yes, it would improve overall utility to ameliorate this particular problem, but there are also hundreds of other problems whose solutions would also improve utility, and frankly, by any measure of urgency or returns to effort, this one really isn’t even in the top 100.
Do you also believe that it’s a crime against humanity for God not to have given all humans (or even any humans) AGI-level intelligence?
Clearly, he means we should kill anyone who deviates from the average because they devalue the rest.
As an intellectual exercise: what would the Catholic Special Containment Procedure for ultra-hazardous memetic materials look like, applied to reaction?
Actually why wouldn’t ultra-traditional Catholicism work in such a role?
Or maybe the hour is later than you think Multiheaded.
Oh wow. It’s on! It’s officially on like Donkey Kong.
Wonder when they’ll put up something quotable, from Land or otherwise—maybe some “watchdog” far-left blog would be interested. (BTW some New-Left-y blog that looks at the aesthetics of materialist philosophy has been covering Land; unfortunately, the university jargon there is near-impenetrable.)
Well, that’s an expected precaution.
Ha-haah! Moldbug got Defoe confused with Swift. Fail!
no
Academia and mainstream political and philosophical tradition have no reason to engage what they don’t need to engage to maintain their position. The Dark Enlightenment is far from power or influence on society. If it demonstrates the ability to grasp either, I am sure something like the Counter-Reformation will be brought to bear against it by the major established institutions.
I took away one thing from the Dark Enlightenment link—that it’s worth being shocked that cities have districts where the local culture makes it hard for people to live with each other. I don’t know whether his claim that first world Asian cities don’t have such districts is true.
As someone who recently realized that the default memeplex is in fact a memeplex and probably wrong, I think I have an idea for why no one on the “inside” counterattacks.
We don’t realize we are even in a memeplex that can be attacked. There’s no explicit defense of those values because they just feel like the way the world is; we don’t recognize them as values needing defending.
The standard universalist immune response is not calibrated to the alt-right, and doesn’t recognize it as hostile. Also, some of it is recognized and flagged as “idiocy; ignore.”
If we do recognize the attack, we have no canned response. It’s hard to get original thought out of people; much easier to get zombie slogan chanting.
I just realized though, that this explanation is entirely a rationalization. It might have no connection to reality.
They are. They just can’t come up with good arguments.
I think what is happening here is a bit more subtle than your summary suggests. First, many of the notions being proposed or discussed, while in some sense “conservative”, are things like Moldbug’s ideas, which, while they fall toward one end of the spectrum, aren’t in any way standard arguments or even standard issues. So people may simply be unable to raise effective arguments, since they are grappling with approaches they haven’t had to think about before. Similarly, I suspect that all of us would have trouble making responses to arguments favoring, say, the complete dissolution of all governments above the county level, not because such arguments are strong, but because we’re not used to thinking about them or constructing arguments against them.
Moreover, the meta-contrarian nature of Less Wrong makes people very taken with arguments of forms they haven’t seen before, so there may be a tendency to upvote or support an interesting contrarian argument even while not paying as much attention to whether the argument simply fails.
Finally, contrarian attitudes have an additional advantage when phrased in a political context: They aren’t as obviously political. The politics-as-mindkiller meme is very strong here, so a viewpoint that everyone recognizes as by nature political gets labeled as potential mindkilling to be avoided while arguments that don’t fit into the standard political dialogue as much don’t pattern match as closely.
Not sure about the last paragraph. People’s ideologies are part of the background to how they think, and political ideas that align with someone’s ideology can sometimes blend into that background without being registered. Contrarian ideas are less likely to blend in, and so more likely to be flagged by mainstreamers as political.
I think him being acutely aware of this possibility is what contributes to feeling under siege by scary aliens.
The fact that I don’t have time to write essays with the historical facts that Moldbug always seems to omit does not mean that I couldn’t.
(Although talk is cheap, so this post is not really a reason for anyone else to believe that).
Generally speaking the historical facts Moldbug omits are the ones most educated readers should be familiar with anyway.
We have seen posters motivated enough to engage in karma assassination of users making right-wing arguments, so this seems plausible. The weight of evidence on several issues certainly seems to lie quite strongly on the alt-right side, and has been building ever more that way for decades.
The demographics of metacontrarianism, however, are something we should keep in mind. Perhaps people clever enough to construct novel arguments on their own, rather than just picking them up from academia or mainstream political tradition, don’t yet have much to signal by doing this. If 10 or 20% of LWers were conservatives and another 10% reactionaries, perhaps they would. For now, though, they stay in the alternative right-wing camp, where the fun ideas and displays of cleverness are to be had. I’m basing that number on anti-libertarian arguments being viable in our community.
It seems possibly relevant to point out that karma assassination has been occurring in the last few days against people of a wide variety of political viewpoints. For example, the recent thread on women’s experience was reported as leading to multiple incidents of karma assassination against people espousing views classically labeled as feminist.
Engagement in karma fights probably doesn’t give much data about accuracy of beliefs or peoples confidence in their own belief structures.
When people talk about karma assassination, what tools do they use for keeping track?
All I’ve got is the count by my name and checking back for a few pages of my comments. I would like to get information about which comments have the most recent karma changes.
And not to be paranoid, but I think I had about 14 points go away a few days ago for no apparent reason. I’m not sure whether I misremembered my total, or someone found a bunch of comments they didn’t like, or it was just spite.
I don’t know of any tools per se. I suspect that people keep track of what karma most of their recent comments had, so they can simply check by looking there. There are also some subtle signs: for example, for most people the most common karma on a comment is zero. So even if you don’t remember your karma (or if you are looking at someone else’s), a lot of recent comments at −1 on a variety of different subjects that don’t look obviously bad is a sign.
Edit: Over what time span was the 14 point drop?
Less than a day.
So that certainly sounds like karma assassination to me (assuming that you remembered the number correctly beforehand). In general, karma is almost always increasing for a user in good standing, so on any given day the variance is mostly a question of how much it goes up. A drop of 14 in a single day in that context seems extreme.
If I see that almost all of the last 20 comments I published before yesterday at three o’clock have 1 point less than they used to (including apparently unobjectionable ones, such as me answering a relevant question), and almost none of my more recent or older comments do, then I guess something fishy is going on.
Libertarians count as right-wing by most left-wing standards, even far right. And then we’ve got a small but vocal faction of neoreactionary/Moldbugger types, who don’t fit cleanly into any modern political typologies but who tend to look extra-super right-wing++ through leftist eyes.
I was surprised as well, but I disagree that it is necessarily scope insensitivity—believing utility is continuously additive requires choosing torture. But some people take that as evidence that utility is not additive—more technically, evidence that utility is not the appropriate analysis of morality (aka picking deontology or virtue ethics or somesuch).
More specific analysis here and more generally here.
In support of this, 435 people chose specks, and 430 chose virtue ethics, deontology, or other.
That’s only weak evidence about the correlation between non-consequentialism and dust specking. If we had 670 consequentialists, 50 deontologists, 180 virtue ethicists, and 200 others, and 40% of each chose dust specks, we’d get numbers like yours even though there wouldn’t be a correlation.
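The arithmetic behind this point is easy to check. A minimal sketch using the hypothetical numbers above (a population with zero correlation between moral paradigm and specking):

```python
# Hypothetical breakdown from the comment above: 40% of every
# moral-philosophy group chooses dust specks, i.e. no correlation.
groups = {"consequentialism": 670, "deontology": 50,
          "virtue ethics": 180, "other": 200}
speck_rate = 0.40

total_speckers = sum(n * speck_rate for n in groups.values())
non_consequentialists = sum(n for k, n in groups.items()
                            if k != "consequentialism")

print(total_speckers)         # 440.0, close to the observed 435
print(non_consequentialists)  # 430, exactly the observed 430
```

So matching marginal totals alone tell us nothing about the correlation; a crosstab is needed.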
I did a crosstab, which should be more informative:
I get different totals for the number of speckers (397) and non-consequentialists (386), though. Maybe my copy of the data’s messed up? (Gnumeric complains the XLS might be corrupt.)
Anyway, I do see a correlation between specks & moral paradigm. My dust speck percentages:
41% for consequentialism (N = 560)
67% for deontology (N = 36)
47% for other/none (N = 145)
65% for virtue ethics (N = 116)
leaving out people who didn’t answer. Consequentialists chose dust specks at a lower rate than each other group (which chi-squared tests confirm is statistically significant). But 41% of our consequentialists did still choose dust specks.
[Edit: “indentation is preserved”, my arse. I am not a Markdown fan.]
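The chi-squared claim above can be reproduced from the published percentages. This is a sketch only: the cell counts below are reconstructed from the rounded percentages, so they are approximations of the true crosstab, not the exact figures.

```python
# Reconstruct approximate cell counts from the rounded percentages above.
cons_n = 560
cons_speck = round(0.41 * cons_n)                 # ~230 consequentialist speckers
others = [(36, 0.67), (145, 0.47), (116, 0.65)]   # deontology, other/none, virtue
non_n = sum(n for n, _ in others)                 # 297
non_speck = round(sum(n * p for n, p in others))  # ~168

# 2x2 table: rows = consequentialist / not, cols = specks / torture
table = [[cons_speck, cons_n - cons_speck],
         [non_speck, non_n - non_speck]]

row = [sum(r) for r in table]
col = [sum(c) for c in zip(*table)]
N = sum(row)

# Pearson chi-squared statistic, computed by hand
chi2 = sum((table[i][j] - row[i] * col[j] / N) ** 2 / (row[i] * col[j] / N)
           for i in range(2) for j in range(2))
print(round(chi2, 1))  # ~18.7, far above the 3.84 cutoff for p < .05 at df = 1
```

Even with the rounding noise, the statistic is an order of magnitude past the significance threshold, which is consistent with the test quoted above.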
I think we’ve found our answer, then.

ETA: Really nice work from satt to prove I was jumping to conclusions here.
Well, you cannot be totally sure. I for one would consider myself a consequentialist, but would still choose dust specks. Correlation doesn’t imply causation!
Well, I guess there are various forms of Consequentialism which would lead one to choose dust specks. That would simply depend on what you’re trying to maximize.
If you want to maximize things like pain, discomfort or the amount of dust in eyes, then yes, you would choose dustspecks.
If, on the other hand, you wanted to maximize the amount of, say, wellbeing, then the only choice available is torture.
It’s not clear to me that one can’t be a utilitarian without agreeing that utility is additive (at least, additive in that manner). Consequentialism makes way more sense to me than deontology or virtue ethics (i.e. “what’s the point of deontology or virtue ethics if it doesn’t give better results?”), but I not only remain completely unconvinced by the arguments for Torture (that I’ve seen, anyway), but also think that Eliezer’s choice of Torture contradicts some of his other posts. But this is probably not the place to have that discussion.
I think Torture vs Dust Specks is really just Eliezer being coy about prioritarianism, as an analogous issue not known by that name emerges from prioritarian maths.
Could you clarify what you mean when you say that Eliezer is “being coy about” prioritarianism?
As for me, I’d never heard of prioritarianism before; having just read the wikipedia article (which does have some style disclaimers and “citation needed”s, so perhaps is not the ideal source), I don’t think it addresses either of my objections. It does at least attempt to capture some of my intuitions about the Specks vs. Torture case.
It’s not exactly libertarian-dominated. More that there are far more libertarians here than in real life (and more socialists, too, likely as not. It’s the “normal” political positions that are underrepresented)
If you break down political orientation by country, you get around 50% socialists among Europeans (which may be a bit higher than the general population), and around 20% socialists among Americans.
I suspect you’d see a higher percentage of libertarians if you restricted to non-lurkers, and even higher if you restricted by karma, or how often they post.
And they are exactly the non-write-in ones in the survey, except for New Zealand that was there and Poland that wasn’t.
New Zealand was 0.8, which is close enough to support your point IMO.
I didn’t do this myself because I didn’t trust my statistical ability enough, and I forgot to mention it on the original post, but...
Can someone check for birth order effects? Whether Less Wrongers are more likely to be first-borns than average? Preferably someone who’s read Judith Rich Harris’ critique of why most birth order effect analyses are hopelessly wrong? Or Gwern? I would trust Gwern on this.
I don’t know Harris’s critique, but here are some numbers.
Out of survey respondents who reported that they have 1 sibling (n=453), 76% said that they were the oldest (i.e., 0 older siblings). By chance, you’d expect 50% to be oldest.
Of those with 2 siblings, 50% are the oldest (vs. 33% expected by chance), n=240.
Of those with 3 siblings, 45% are the oldest (vs. 25% expected by chance), n=120.
Of those with 4 or more siblings, 50% are the oldest (vs. under 20% expected by chance), n=58.
Of those with 0 siblings, 100% are the oldest (vs. 100% expected by chance), n=163.
Overall, 69% of those who answered the “number of older siblings” question are the oldest.
Those look like big effects, unlikely to be explained by whatever artifacts Harris has found.
There are a handful of people who left the number of older siblings blank but did report a total number of siblings, or who reported a non-integer number of siblings (half-siblings), but they are too few to make much difference in the numbers.
This doesn’t seem to vary by degree of involvement in LW; overall 71% of those in the top third of LW exposure (based on sequence-reading, karma, etc.) are the oldest. Here is a little table with the breakdown for them; it shows the percent of people who are the oldest, by number of siblings, for all respondents vs. the highest third in LW exposure.
siblings   all   high-LW
0          100   100
1           76    80
2           50    45
3           45    51
4+          50    62
That 62% is 8/13, so not very meaningful.
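To put numbers on “big effects”: a quick sketch of how far the observed eldest-child fractions are from chance, using the counts reported above and a normal approximation to the binomial (so these z-scores are approximate).

```python
import math

# (siblings k, n respondents, observed fraction who are oldest)
# Chance expectation of being oldest with k siblings is 1/(k+1).
data = [
    (1, 453, 0.76),
    (2, 240, 0.50),
    (3, 120, 0.45),
]

z_scores = []
for k, n, obs in data:
    p = 1 / (k + 1)                   # expected by chance
    se = math.sqrt(p * (1 - p) / n)   # binomial standard error
    z_scores.append((obs - p) / se)

print([round(z, 1) for z in z_scores])  # roughly [11.1, 5.5, 5.1]
```

All three groups sit five or more standard errors above chance, so simple sampling noise can’t explain the skew (though confounders like age, discussed below, still could).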
There seems to be a pretty big potential confounder: age. Many respondents’ younger siblings are too young to be contributing to this site, while no one’s older siblings are too old (unless they’re dead, but since ~98% of the community is under age 60 that’s not a significant concern).
You’re saying that if we randomly picked 22-31 year-olds, a disproportionate number would be eldest children? For that to work, there’d have to be more eldest children than youngest in that age range. Given the increase in population, that is certainly plausible: you would expect more younger families than older families, which means that within an age range there would be a disproportionate number of older siblings (unless the range is so young that not all of the younger siblings have been born yet). But it doesn’t seem like it would be nearly that significant.
The fact that most of the respondents are eldest children is a confounder for this.
In that case, wouldn’t people over 60 also be too old?
Can somebody redo the analysis by controlling for age?
I don’t know anything about birth order effects, sorry.
I had no knowledge of such a survey. These might be more efficient if they were posted in a blatantly obvious manner, like on the banner.
IQ Trend Analysis:
The self-reported IQ results on these surveys have been, to use Yvain’s wording, “ridiculed” because they’d mean that the average LessWronger is gifted. Various other questions were added to the survey this time which gives us things to check against, and the results of these other questions have made the IQ figures more believable.
Summary:
LessWrong has lost IQ points on the self-reported scores every year, for a total of 7.18 IQ points in 3.7 years, or about 2 points per year. If LessWrong began with 145.88 IQ points in May 2009, then LessWrong has lost over half of its giftedness (using IQ 132 as the definition, explained below).
The self-reported figures for each year:
IQ on 03/12/2009: 145.88
IQ on 00/00/2010: Unknown*
IQ on 12/05/2011: 140
IQ on 11/29/2012: 138.7
IQ points lost each year:
2.94 IQ point drop for 2010 (Estimated*)
2.94 IQ point drop for 2011 (Estimated*)
1.30 IQ point drop for 2012
Analysis:
Average IQ points lost per year: 1.94
Total IQ points lost: 7.18 in 3.7 years
Total IQ points LessWrong had above the gifted line: 13.88 (145.88 − 132*)
Percent less giftedness on the last survey result: 52% (7.18 / 13.88)
Footnotes:
* Unknown 2010 figures: There was no 2010 survey. The first line of the 2011 survey proposition mentions that.
* Estimated IQ point drops for 2010 and 2011: I divided the 2009-to-2011 IQ drop by 2 and distributed it across 2010 and 2011.
* IQ 132 significance: IQ 132 is the top 2% (This may vary a little bit from one IQ test to another) which would qualify one as gifted by every IQ-based definition I know of. It is also (roughly) Mensa’s entrance requirement (depending on the test) though Mensa does not dictate the legal or psychologist’s definitions of giftedness. They are a club, not a developmental psychology authority.
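The summary arithmetic above can be reproduced in a few lines (a sketch using only the self-reported means quoted in this comment):

```python
# Self-reported mean IQ from the 2009 and 2012 surveys, as quoted above
iq_2009, iq_2012 = 145.88, 138.7
years = 3.7
gifted_line = 132.0  # top-2% cutoff used as the giftedness definition above

total_drop = iq_2009 - iq_2012      # 7.18 points
per_year = total_drop / years       # ~1.94 points per year
headroom = iq_2009 - gifted_line    # 13.88 points above the gifted line
share_lost = total_drop / headroom  # ~0.52, i.e. 52% of giftedness "spent"

print(round(total_drop, 2), round(per_year, 2), round(share_lost, 2))
```

This just confirms the 7.18-point, ~1.94-per-year, and 52% figures follow from the quoted means; it says nothing about whether the self-reports themselves are trustworthy, which the replies below dispute.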
As I mentioned previously, and judging from the graphs, the standard deviations of the IQs are obviously mixed up, because they were not determined in the questionnaire, and the people who answered are probably not educated about them either. Including IQs in s.d. 24 with those in s.d. 16 and 15 is bound to inflate the average IQ. The top scores in that graph, or at the very least some of them, are in s.d. 24, which means they would be a lot lower in s.d. 15. IQ 132 is the cutoff for s.d. 16, while s.d. 15 is the one most adopted in recent scientific literature. For s.d. 24, it is 148. Mensa, and often people in the press, like to use s.d. 24 to sound more impressive to amateurs.
This probably makes tests like the SAT more reliable as an estimation, because they have the same standard for all who submitted their scores, although in this case the ceiling effect would become apparent, because perfect or nearly-perfect scores wouldn’t go upwards of a certain IQ.
Ooh, you bring up good points. These are a source of noise, for sure.
Now I’m wondering if there are any clever ways to compensate for any of these and remove that noise from the survey…
Error bars, please!
The summary data:
2009: n=67, 145.88(14.02)
2011: n=331; 140.10(13.07)
2012: n=346; 138.30(12.58); graphed:
The basic formula for a confidence interval of a population is:
mean ± (z-score of confidence × (standard deviation / √n))
So for z-score = 95% = 1.96, plug in the summary numbers above. Or run the usual t-tests and look at the confidence interval they calculate for the difference; for 2009 & 2012, the 95% CI for the difference in mean IQ is 3.563-10.578.
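The interval arithmetic can be sketched directly from the summary statistics above. Note this uses the normal z = 1.96 rather than the exact t quantile, and the summary means rather than the raw data, so the difference interval comes out slightly wider and shifted relative to the 3.563-10.578 quoted from the full t-test:

```python
import math

# (n, mean, sd) from the summary data above
y2009 = (67, 145.88, 14.02)
y2012 = (346, 138.30, 12.58)

def ci(n, mean, sd, z=1.96):
    """Approximate 95% CI for a single mean: mean +/- z * sd / sqrt(n)."""
    half = z * sd / math.sqrt(n)
    return (mean - half, mean + half)

print([round(x, 2) for x in ci(*y2012)])  # roughly 138.30 +/- 1.33

# CI for the difference of the two means (Welch-style standard error)
se_diff = math.sqrt(y2009[2]**2 / y2009[0] + y2012[2]**2 / y2012[0])
diff = y2009[1] - y2012[1]
print(round(diff - 1.96 * se_diff, 2),
      round(diff + 1.96 * se_diff, 2))  # roughly (3.97, 11.19)
```

Either way, the interval for the difference excludes zero comfortably, so the decline is not just sampling noise.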
To add a linear model (for those unfamiliar, see my HPMoR examples) which will really just recapitulate the simple averages calculation:
Note that Epiphany dates the 2009 survey to around March, while the other two surveys happened around November, so inputting the survey dates just as years lowballs the time gap between the first & second surveys. Your linear trend’ll be a bit exaggerated.
I’ve fixed it as appropriate.
Before, the slope per year was −2.24 (minus 2.25 points a year); now the slope comes out as −0.00519, but if I’m understanding my changes right, the unit has switched from per year to per day, and 365.25 times −0.00519 IQ points per day is −1.896 per year.
2.25 vs 1.9 is fairly different.
I was lazy and ignored all non-numerical IQ comments, so I got slightly different numbers. But my 95% confidence intervals are:
145.18±3.27 in 2009
140.12±1.41 in 2011
138.42±1.33 in 2012
This comment is relevant; we have a dataset of users who both took the Raven’s test and self-reported IQ. The mean of the group that did both was rather close to the means of the groups that did each separately, but the correlation between the tests was low at .2. If you looked just at responders with positive karma, the correlation increased to a more respectable .45; if you looked just at responders without positive karma, the correlation was -.11. This was a small fraction of responders as a whole, and the average IQ is already tremendously inflated by nonresponse. (If we assumed that, on average, people who didn’t self-report an IQ were IQ 100, then the LW average would be only 112!)
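The parenthetical nonresponse adjustment is just a weighted mean. A rough sketch, assuming the ~346 self-reporters from the 2012 summary above and the ~1195 total respondents quoted in the post (both approximations):

```python
# Weighted-mean nonresponse adjustment: assume everyone who skipped
# the IQ question would average IQ 100.
total_respondents = 1195               # total 2012 survey responses
reporters, reported_mean = 346, 138.3  # 2012 self-reported IQ summary
assumed_nonreporter_mean = 100.0       # pessimistic assumption

skipped = total_respondents - reporters
adjusted = (reporters * reported_mean
            + skipped * assumed_nonreporter_mean) / total_respondents
print(round(adjusted, 1))  # ~111, in line with the ~112 figure above
```

The exact figure depends on which response counts you plug in, but any reasonable choice lands in the low 110s, illustrating how much the headline average leans on who chose to answer.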
How well calibrated were the prediction book users?
Unfortunately we lacked a question to track prediction book users.
Hopefully then someone will do a supplementary calibration test for prediction book users in the comments here or in a new post on the discussion board. (Apologies for not doing it myself)
http://predictionbook.com/predictions displays an overall graph.
As for the IQ question and especially the self-reported IQ, it did not take into account that IQ should come at least with standard deviation. Otherwise it’s like asking for a height number without saying if it is in centimeters, meters, or feet. It’s understandable that people who didn’t study psychometrics with some depth don’t know this, though.
IQ can be a ratio IQ or a deviation IQ. In the first case it is mental age divided by actual age, with 100 as the norm. This is still used mostly for children, but it’s still possible to see such scores. Deviation IQ is more common and is supposed to measure one’s intelligence according to rarity in a population.
Sometimes these tests are standardized for certain countries, in which case an IQ score only has relevance in relation to that country’s population, but generally the standard is the population of England or the USA, with its average being 100. Other countries have averages ranging from about 67 to 107 (s.d. 15), compared to it. The average IQ score of the world is estimated at about 90, but there are also differences in standard deviation among different populations, some have bigger variation than others, and also between the sexes (men have a slightly higher standard deviation).
Standard deviations used are 15, 16, and 24. For instance, an IQ score one standard deviation above 100 could be 115, 116, or 124. An IQ of 163 in s.d. 15 corresponds to an IQ of 167 in s.d. 16, or an IQ of 200 in s.d. 24, which, in average, correspond to a ratio IQ of 185. When estimating the true world rarity of IQ scores, though, very lengthy and complex estimations would need to be made, otherwise the scores only reflect the rarity in England or in the USA, and not in the world. When it comes to scores higher than two or three standard deviations above the average, most IQ tests are inadequate and insufficiently standardized to measure them and their rarity well.
This information is for your curiosity. The relevant point is that the self-reported IQ scores quite possibly were stated in differing standard deviations.
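The conversions above amount to holding the z-score fixed and rescaling it; a minimal sketch in Python (the function name is my own, not from any standard library):

```python
def convert_iq(score, sd_from, sd_to, mean=100.0):
    """Convert a deviation-IQ score between standard-deviation
    conventions by holding the z-score fixed."""
    z = (score - mean) / sd_from
    return mean + z * sd_to

# One standard deviation above the mean in each convention:
print(convert_iq(115, 15, 16))  # 116.0
print(convert_iq(115, 15, 24))  # 124.0
# The example above: 163 (s.d. 15) is about 167 (s.d. 16) and about 200 (s.d. 24).
print(convert_iq(163, 15, 16))
print(convert_iq(163, 15, 24))
```

Note that this only relabels the same rarity; it says nothing about how well a given test actually measures scores that far from the mean.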
I think you missed some duplicates in for_public.csv: rows 26, 30, 761 and 847 are identical to the preceding one.

What? They calibrated the test using the people who took it online?
I’m fairly sure the Big Five wasn’t calibrated on an online sample, but I have no idea about iqtest.dk.
Not a survey response but too good to omit:
http://www.onislam.net/english/ask-about-islam/islam-and-the-world/worldview/460333-fiction-depiction-allegory-.html
I wouldn’t necessarily read too much into your calibration question, given that it’s just one question, and there was something of a gotcha.
One thing I learned from doing calibration exercises is that I tended to be much too tentative with my 50% guesses.
When I answered the calibration question, I used my knowledge of other math that either had to come before him or couldn’t have, to narrow the possible window of his birth down to about 200 years. Random chance would then give me about a 20% shot. I thought I had somewhat better information than random chance within that window, so I estimated my confidence (IIRC) at 30%. I was, alas, wrong, but I’m pretty confident that I would get around 30% of problems with a similar profile correct. If this problem was tricky, then it is more likely than average to be one that people get wrong in a large set; but that would be balanced by problems which are straightforward.
Not to suggest that this result isn’t evidence of LW’s miscalibration. In fact, it’s strong enough evidence for me to throw into serious doubt the last survey’s finding that we were better calibrated than a normal population. OTOH neither bit of evidence is terribly strong. A set of 5-10 different problems would make for much stronger evidence one way or the other.
Now I wish I had written a funnier description...so many of these are silly~
Alternate Explanations for LW’s Calibration Atrociousness:
Maybe a lot of the untrained people simply looked up the answer to the question. If you did not rule that out with your study methods, then consider seeing whether a suspiciously large number of them entered the exact right year?
Maybe LWers were suffering from something slightly different from the overconfidence bias you’re hoping to detect: difficulty admitting that they have no idea when Thomas Bayes was born because they feel they should really know that.
The mean was 1768, the median 1780, and the mode 1800. Only 169 of 1006 people who answered the question got an answer within 20 years of 1701. Moreover, the three people that admitted to looking it up (and therefore didn’t give a calibration) all gave incorrect answers: 1750, 1759, and 1850. So it seems like your first explanation can’t be right.
After trying a bunch of modifications to the data, it seems like the best explanation is that the poor calibration happened because people didn’t think about the error margin carefully enough. If we change the error margin to 80 years instead of 20, then the responses seem to look roughly like the untrained example from the graph in Yvain’s analysis.
Another observation is that after we drop the 45 people who gave confidence levels >85% (and in fact, 89% of them were right), the remaining data is absolutely abysmal: the remaining answers are essentially uncorrelated with the confidence levels.
This suggests that there were a few pretty knowledgeable people who got the answer right and that was that. Everyone else just guessed and didn’t know how to calibrate; this may correspond to your second explanation.
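The checks described in the last few comments can be sketched as a simple bucketing analysis. This runs on made-up data shaped like the pattern described above (a small, highly confident group that is usually right; everyone else near chance); the numbers are illustrative, not the actual survey file:

```python
from collections import defaultdict

def calibration_table(answers):
    """Group (stated_confidence, was_correct) pairs by stated
    confidence (rounded to one decimal) and report the observed
    hit rate in each group. Well-calibrated answers have hit
    rates close to the stated confidence."""
    buckets = defaultdict(list)
    for confidence, correct in answers:
        buckets[round(confidence, 1)].append(correct)
    return {c: sum(v) / len(v) for c, v in sorted(buckets.items())}

# Illustrative data: 45 people at 90% confidence who are mostly right,
# and low-to-medium confidence answerers who do no better than chance.
answers = ([(0.9, True)] * 40 + [(0.9, False)] * 5
           + [(0.6, True)] * 25 + [(0.6, False)] * 75
           + [(0.3, True)] * 20 + [(0.3, False)] * 80)
print(calibration_table(answers))
```

On data like this, only the 0.9 bucket tracks its stated confidence; the 0.3 and 0.6 buckets show hit rates essentially unrelated to confidence, which is the pattern described above.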
Another thing I have noticed is that I tend to pigeonhole stuff into centuries; for example, once in a TV quiz there was a question “which of these pairs of people could have met” (i.e. their lives overlapped), I immediately thought “It can’t be Picasso and van Gogh: Picasso lived in the 20th century, whereas van Gogh lived in the 19th century.” I was wrong. Picasso was born in 1881 and van Gogh died in 1890. If other people also have this bias, this can help explain why so many more people answered 17xx than 16xx, thereby causing the median answer to be much later than the correct answer.
I hate the nth century convention because it doesn’t match up with the numbers used for the dates, so I always refer to the dates… but that actually tends to confuse people.
I was going to say “the 1700s”, but that’s ambiguous as in principle it could refer either to a century or to its first decade. (OTOH, it would be more accurate, as my mental pigeonholes lump the year 1700 together with the following 99 years, not with the previous.)
Good points, Kindly, thank you. New alternate explanation idea:
When these people encounter this question, they’re slogging through this huge survey. They’re not doing an IQ test. This is more casual. They’re being asked stuff like “How many partners do you have?” By the time they get down to that question, they’re probably in a casual answering mode, and they’re probably a little tired and looking for an efficient way to finish. When they see the Bayes question, they’re probably not thinking “This question is so important! They’re going to be gauging LessWrong’s rationality progress with it! I had better really think about this!” They’re probably like “Output answer, next question.”
If we really want to test them, we need to make it clear that we’re testing them. And if we want them to be serious about it, we have to make it clear that it’s important. I hypothesize that if we were to do a test (not a survey) and explain that it’s serious because we’re gauging LessWrong’s progress, and also make it short so that the person can focus a lot of attention onto each question, we’d see less atrocious results.
In hindsight, I wonder why I didn’t think about the effects of context before. Yvain didn’t seem to either; he thought something might be wrong with the question. This seems like one of those things that is right in front of our faces but is hard to see.
I think that people may be rationing their mental stamina, and may not be going through all the steps it takes to answer this type of question.
Uh, what? The point of LessWrong is to make people better all the time, not just better when they think “ah, now it’s time to turn on my rationality skills.” If people aren’t applying those skills when they don’t know they’re being tested, that’s a very serious problem, because it means the skills aren’t actually ingrained on the deep and fundamental level that we want.
You know that, Katydee, but do all the people who are taking the survey think that way? The majority of them haven’t even finished the sequences. I agree with you that it’s ideal for us to be good rationalists all the time, but mental stamina is a big factor.
Being rational takes more energy than being irrational. You have to put thought into it. Some people have a lot of mental energy. To refer to something less vague and more scientific: there are different levels of intelligence and different levels of intellectual supersensitivity (a term from Dabrowski that refers to how excitable certain aspects of your nervous system are). Long story short: some people cannot analyze constantly because it’s too difficult for them to do so. They run out of juice.

Perhaps you are one of those rare people who has such high stamina for analysis that you rarely run into your limit. If that’s the case, it probably seems strange to you that anybody wouldn’t attempt to maintain a state of constant analysis. Most people with unusual intellectual stamina seem to view others as lazy when they observe that those other people aren’t doing intellectual things all the time. It frequently does not occur to them to consider that there may be an intellectual difference.

The sad truth is that most people have much lower limits on how much intellectual activity they can do in a day than “constant”. If you want evidence of this, look at Ford’s studies showing that 40 hours a week was the optimum number of hours for his employees to work. Presumably they were just doing factory work assembling car parts, which (if it fits the stereotype of factory work being repetitive) was probably pretty low on the scale of what’s intellectually demanding, but he found that if they tried to work 60 hours for two weeks in a row, their output would dip below the amount he’d normally get from 40 hours. This is because of mistakes. You’d think that the average human brain could do repetitive tasks constantly, but evidently even that tires the brain.
So in reality, the vast majority of people are not capable of the kind of constant meta-cognitive analysis that is required to be rational all the time. You use the word “ingrained” and I have seen Eliezer talk about how patterns of behavior can become habits (I assume he means that the thoughts are cached) and I think this kind of habit / ingrained response works beautifully when no decision-making is required and you can simply do the same thing that you usually do. But whenever one is trying to figure something out (like for instance working out the answers to questions on a survey) they’re going to need to put additional brainpower into that.
I had an experience where, due to unexpected circumstances, I developed some vitamin deficiencies. I would run out of mental energy very quickly if I tried to think much. I had, perhaps, half an hour of analysis available to me in a day. This is very unusual for me because I’m used to having a brain that loves analysis and seems to want to do it constantly (I hadn’t tested the actual number of hours for which I was able to analyze, but I would feel bored if I wasn’t doing something like psychoanalysis or problem-solving for the majority of the day). When I was deficient, I began to ration my brainpower. That sounds terrible, but that is what I did. I needed to protect my ability to analyze, to make sure I had enough left over to do all the tasks I needed to do each day. I could feel it slipping away while I was working on problems, and I could observe what happened to me after I fatigued my brain. (Vegetable-like state.)
As I used my brainpower rationing strategies, it dawned on me that others ration brainpower, too. I see it all the time. Suddenly, I understood what they were doing. I understood why they kept telling me things like “You think too much!” They needed to change the subject so they wouldn’t become mentally fatigued. :/
Even if the average IQ at LessWrong is in the gifted range, that doesn’t give everyone the exact same abilities, and doesn’t mean that everyone has the stamina to analyze constantly. Human abilities vary wildly from person to person. Everyone has a limit when it comes to how much thinking they can do in a day. I have no way of knowing exactly what LessWrong’s average limit is, but I would not be surprised if most of them use strategies for rationing brainpower and have to do things like prioritize answering survey questions lower on their list of things to “give it their all” on, especially when there are a lot of them, and they’re getting tired.
Fascinating!
It’s making me realize why my summer project, which was to read Eat That Frog by Brian Tracy, was such a failure. The book is meant to be applied to work, preferably in an office environment–i.e. during your 40 productive work-hours. I was already working 40 hours a week at my extremely stimulating job as a nurse’s aide at the hospital, where I had barely any time to sit down and think about anything, and I certainly didn’t have procrastination problems. Then I would get home, exhausted, with my brain about to explode from all the new interesting stuff I’d been seeing and doing all day, and try to apply Brian Tracy’s productivity methods to the personal interest projects I was doing in my spare time.
This was a very efficient way to make these things not fun, make me feel guilty about being a procrastinator, etc. It gave me an aversion to starting projects, because the part of my brain that likes and needs to do something easy and fun after work knew it would be roped into doing something mentally tiring, and that it would be made to feel guilty over not wanting to do it.
I’m hoping that once I’m graduated and work as a nurse for a year or two, so that I have a chance to get accustomed to a given unit and don’t have to spend so much mental effort, I’ll have more left over for outside interests and can start reading about physics and programming for fun again. (Used to be able to do this in first and second year, definitely can’t now.)
I’m glad you seem to have benefited from my explanation. If you want to do mentally draining reading, maybe weekends or later on in the evenings after you’ve rested would be a good time for that? If you’ve rested first, you might be able to scrape up a little extra juice.
Of course everyone has their own mental stamina limit, so nobody can tell you whether you do or don’t have enough stamina to do additional intellectual activities after work. And it may vary day to day, as work is not likely to demand the exact same amount of brainpower every day.
An interesting experiment would be to see if there’s anything that restores your stamina like a bath, a 20 minute nap after work, meditation, watching TV, or playing a fun game. Simply laying down in a dark quiet place does wonders for me if I am stressed out or fatigued. I would love to see someone log their mental stamina over time and correlate that to different activities that might restore stamina.
There are also stress reduction techniques that may help prevent you from losing stamina in the first place that could be interesting to experiment with.
And if you’re not taking 15 minute breaks every 90 minutes during work, you might be “over-training” your brain. Over-training might result in an amplification of fatigue. “The Power of Full Engagement: Manage Energy Not Time” is likely to be of interest.
If you decide to do mental stamina experiments, definitely let me know!
I hadn’t actually thought of that before...but it’s an awesome idea! I will let you know if I get around to it.
Woo-hoo! (:
I’ve also found that pouring lots of cold water on my face helps me squeeze out the last drops of stamina I have left, and allow me to work twenty more minutes or so. (It doesn’t actually restore stamina, so it doesn’t work if I do that more than a couple times in a row.)
Hmmm. That might be one or a combination of the following:
Taking a five minute break.
Enjoying physical sensation. (Enjoyment seems to restore stamina for me, perhaps that’s because the brain uses neurotransmitters for processing, and triggering pleasure involves increasing the amount of certain neurotransmitters.)
Fifteen-minute breaks are supposed to be optimal, and if you maximized pleasure during your break, I wonder what amount of stamina that would restore?
Probably 2. -- the break actually lasts about one minute.
Re the problem of having to think all the time: a good start is to develop a habit of rejecting certainty about judgments and beliefs that you haven’t examined sufficiently. That is, if your intuition shouts at you that something is quite clear, but you haven’t thought about it for a few minutes, ignore that intuition (and mark it as a potential bug) unless you understand a reliable reason not to ignore it in that case. If you don’t have the stamina or incentives to examine such beliefs/judgments in more detail, that’s all right, as long as you remain correspondingly uncertain, and realize that the decisions you make might be suboptimal for that reason (which should suitably adjust your incentives for thinking harder, depending on the importance of the decisions).
The process of choosing a probability is not quite that simple. You’re not just making a boolean decision about whether you know enough to know, you’re actually taking the time to distinguish between 10 different amounts of confidence (10%, 20%, 30%, etc), and then making ten more tiny distinctions (30%, 31%, 32% for instance)… at least that’s the way that I do it. (More efficient than making enough distinctions to choose between 100 different options.) When you are wondering exactly how likely you are to know something in order to choose a percentage, that’s when you have to start analyzing things. In order to answer the question, my thought process looked like this:
Bayes. I have to remember who that is. Okay, that’s the guy that came up with Bayesian probability. (This was instant, but that doesn’t mean it took zero mental work.)
Do I have his birthday in here? Nothing comes to mind.
Digs further: Do I have any reason to have read about his birthday at any point? No. Do I remember seeing a page about him? I can’t remember anything I read about his birthday.
Considers whether I should just go “I don’t know” and put a random year with a 0% probability. Decides that this would be copping out and I should try to actually figure this out.
When was Bayesian probability invented? Let’s see… at what point in history would that have occurred?
Try to brainstorm events that may have required Bayesian probability, or that would have suggested it didn’t exist yet.
Try to remember the time periods for when these events happened.
Defines a vague section of time in history.
Considers whether there might be some method of double-checking it.
Considers the meaning of “within 20 years either way” and what that means for the probability that I’m right.
Figures out where in my vague section of time the 40 year range should be fit.
Figures out which year is in the middle of the 40 year range and types it in.
Consider how many years Bayes would likely have to have lived for before giving his theorems to the world and adjust the year to that.
Considers whether it was at all possible for Bayesian probability to have existed before or after each event.
If possible, consider how likely it was that Bayesian probability existed before/after each event.
Calculate how many 40-year ranges there are in the vague section of time between the events where Bayes could not have been born.
Calculate the chance that I chose the correct 40-year section out of all the possible sections, if odds are equal.
Compare this to my probabilities regarding how likely it was for Bayes theorem to have existed before and after certain events.
Adjust my probability figure to take all that into account.
My answer to this question took at least twenty steps, and that doesn’t even count all the steps I went through for each event, nor does it count all the sub steps I went through for things that I sort of hand-waved like “Adjust my probability figure to take all that into account”.
If you think figuring out stuff is instant, you underestimate the number of steps your brain does in order to figure things out. I highly recommend doing meditation to improve your meta-cognition. Meta-cognition is awesome.
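The base-rate step in the list above (the chance of landing in the right 40-year window by luck) is easy to sanity-check. Here is a sketch using the 200-year plausible range another commenter mentioned; the numbers are illustrative:

```python
import random

random.seed(0)

def hit_rate(range_years=200, margin=20, trials=100_000):
    """If the true year could plausibly be anywhere in a range of
    `range_years` and you guess uniformly at random within it, how
    often does your guess land within +/- `margin` years of the truth?"""
    true_year = 1701  # Bayes's actual birth year
    lo = true_year - range_years // 2
    hits = sum(
        abs(random.randint(lo, lo + range_years - 1) - true_year) <= margin
        for _ in range(trials)
    )
    return hits / trials

print(hit_rate())  # roughly 0.2: a 41-year hit window out of 200 years
```

This matches the roughly 20% base rate estimated earlier in the thread; any confidence above that has to come from knowledge about where in the window the year falls.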
The straightforward interpretation of your words evaluates as a falsity, as you can’t estimate informal beliefs to within 1%.
I’d put it more in terms of decibels of log-odds than percentages of probability. Telling 98% from 99% (i.e. +17 dB from +20 dB) sounds easier to me than telling 50% from 56% (i.e. 0 dB from +1 dB).
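The decibel figures here are 10 times the base-10 log of the odds ratio; a quick sketch:

```python
import math

def log_odds_db(p):
    """Express a probability as decibels of log-odds: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

print(round(log_odds_db(0.50)))  # 0 dB
print(round(log_odds_db(0.56)))  # 1 dB
print(round(log_odds_db(0.98)))  # 17 dB
print(round(log_odds_db(0.99)))  # 20 dB
```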
Well, you can, but it would be a waste of time.
No, I’m pretty certain you can’t. You can’t even formulate truth conditions for the correctness of such an evaluation. Only in very special circumstances would getting to that point be plausible (when a conclusion is mostly determined by data that is received in an explicit form, or if you work with a formalizable specification of a situation, as in probability theory problems; this is not what I meant by “informal beliefs”).
(I was commenting on a skill/habit that might be useful in the situations where you don’t/can’t make the effort of explicitly reasoning about things. Don’t fight the hypothetical.)
Is it your position that there is a thinking skill that is actually accurate for figuring stuff out without thinking about it?
I expect you can improve accuracy in the sense of improving calibration (by reducing estimated precision and avoiding unwarranted overconfidence even when you are not considering questions in detail) if your intuitive estimation has an overconfidence problem, which seems to be common; most annoying is the form “The solution is S!” for some promptly confabulated arbitrary S, when quantifying uncertainty isn’t even on the agenda.
(I feel the language of there being “positions” has epistemically unhealthy connotations of encouraging status quo bias with respect to beliefs, although it’s clear what you mean.)
The point is to make these things automatic so that one doesn’t have to analyze all the time. I definitely don’t feel like I “maintain a state of constant analysis,” even when applying purportedly advanced rationality techniques. It basically feels the same as thinking about things normally, except that I am right more often.
I don’t believe that your claim is true, but if it is I think LessWrong is doomed as a concept. I frankly do not think people will be able to accurately evaluate when they need to apply thinking skills to their decisions, so if we cannot teach skills on this level—teach habits, as you say—I do not think LessWrong will ever accomplish anything of real worth.
One example of a skill that I have taken on on this level is reference class forecasting. If I need to estimate how long something will take, my go-to method is to take the outside view. I am so used to this that it is now the automatic response to questions of estimating times.
I don’t use “brainpower rationing” because I frankly have never felt the need to do so. I have told people that they “think too much” under certain circumstances (most notably when thinking is impeding action), and the thought of “brainpower rationing” has never come to mind until I saw this post.
What do you make of this?
Maybe I misinterpreted here but it sounds like you’re saying you don’t believe in mental stamina limits? Maybe you mean that you don’t think rationality requires much brainpower?
I don’t think we’d be doomed, and there are a few reasons for that:
There are people in existence who really can analyze pretty much constantly. THOSE people would theoretically have a pretty good chance of being rational all the time.
People who cannot analyze anywhere near constantly can simply choose their battles. If they’re aware of their mental stamina limits, they can work with them. Realizing you don’t know stuff and that you don’t have enough mental stamina to figure it out right now is kind of sad but it is still perfectly rational, so perhaps rationalists with low mental stamina can still be good rationalists that way.
There are things that decrease mental fatigue. For instance, taking 15-minute breaks every 90 minutes (the book “The Power of Full Engagement: Manage Energy Not Time” talks about this). We could do experiments on ourselves to find out what other things reduce or prevent mental fatigue. There may be low-hanging fruit we’re totally unaware of.
Okay, so you’ve learned to instantly go to a certain method. I can believe that this does not take much brainpower. However, how much brainpower does it take to execute the outside view method, on average, for the types of things you use it for? How many times can you execute the outside view in a day? Have you ever tried to reach your mental stamina limit?
Do you ever get home from work and feel relieved that you can relax now, and then do something that’s not mentally taxing? Do you ever find that you’re starting to hate an activity, and notice you’re making more and more mistakes? Do you ever feel lazy and can’t be bothered to do anything useful? I bet you do experience mental fatigue but don’t recognize it as such. A lot of people just berate themselves for being unproductive, and don’t consciously recognize that they’ve hit a real limit.
My method of doing the same calculation was:
see name
remember “mid-1700s”
therefore he must have been born in the early 1700s.
20 year margin means I don’t have to be precise.
answer: 1700 or 1705 (can’t remember which I put)
get question right
The more difficult part was the probability estimate. But using the heuristics taught to me by this book, this took only a few calculations. And the more I do these types of calculations, the faster and more calibrated I become. Eventually I hope to make them automatic at the 8 + 4 = 12 level.
If I were doing the calculation “for real” and not on a survey my algorithm would be much easier:
see name
copy/paste name into Google
look at Wikipedia
look at other sources to confirm (if important)
I know they exist on some level thanks to my experience with dual n-back, but I’ve yet to encounter any practical situation that imposes them (aside from “getting tired”, which is different), and if I did I’m sure I could train my way out, just as I trained my way out of certain physical stamina limits. For example, it was once hard for me to maintain my endurance throughout a full fencing bout, but after some training I can do several in a row without becoming seriously fatigued. I’m sure better fencers than me can do even more.
LessWrong and CFAR, in my view, should provide the mental equivalent of that training if it is indeed necessary for the practice of rationality. I’m not, however, convinced that it is.
Immeasurably small (no perceived effort and takes less time than the alternative)/indeterminate/not in this respect. Most of the effort was involved in correctly identifying situations in which the method was useful, not in actually executing the method, but once the method became sufficiently ingrained that too went away.
No. My work is generally fun.
Not really. Sometimes I get bored, does that count?
Negative.
I think mental stamina is an important concept.
I’ll add mental exuberance (not an ideally clear word, but I don’t have a better one) -- how much people feel an impulse to think.
Nancy, there is already a term for this. It’s “intellectual overexcitability” or “intellectual supersensitivity”. These are terms from Dabrowski. Look up the “Theory of Positive Disintegration” to learn more.
Those terms seem like pathologizing—which is not surprising, considering that Dabrowski puts emphasis on the difficulties of the path. I was thinking more of the idea that some people like thinking more than others, just as some people like moving around more than others, which is something much less intense.
I was wondering whether Dabrowski was influenced by Gurdjieff, and it turns out that he was.
Thanks for the details. If I remember correctly, I was running out of the ability to care by the time I got to the Bayes question.
What were the vitamin deficiencies?
I’m not sure I can reliably recognize what mental fatigue feels like. I’d like to be able to diagnose it in myself (because I suspect that I have less mental energy than I used to), so do you know of any reasonably quick way to induce something that feels like mental fatigue, e.g. alcohol?
Alcohol doesn’t induce mental fatigue in me; high temperatures and dehydration do. YMMV.
EDIT: So does not eating enough sugars.
Whatever your worst subject is, do a whole bunch of exercises in it until you start making so many mistakes it is not worth continuing. No need for alcohol, might as well wear out your brain.
It would be interesting to see if you’d get different types of fatigue from doing different kinds of activities. For instance, if I do three hours of math problems, I have trouble speaking after that—it’s like my symbol manipulation circuitry is fried. (I have dyslexia, so that’s probably related.) If I wear out my verbal processor (something that I think only started happening to me after I developed some unexpected vitamin deficiencies) this results in irritation. I can’t explain myself very well, so people jump on me for mistakes, and it’s really hard to tell them what I meant instead, so I get frustrated.
So, exercising each area of mental ability might yield different fatigue symptoms.
If you decide to experiment on yourself I’m definitely curious about your results!
That happens to me, too.
What are your fatigue symptoms? How much can you do of each activity before becoming fatigued?
If I’ve been reading/studying too long, I find it much harder to concentrate and am more easily distracted by stray thoughts.
If I’ve been writing computer code/doing maths too long, I make the kind of trivial mistakes that screw up the results but are hard to locate way more often.
It depends—usually between 20 minutes and 3 hours.
My first thought about this is that people’s rationality ‘in real life’ is determined precisely by how likely they are to notice a Bayes question in an informal setting, where they may be tired and feeling mentally lazy. In Keith Stanovich’s terms, rationality is mostly about the reflective mind: someone’s capacity and habit of re-computing a problem’s answer using the algorithmic mind, rather than accepting the intuitive default answer that their autonomous mind spits out.
IQ tests tend to be formal; it’s very obvious that you’re being tested. They don’t measure rationality in the sense that most LWers mean it: the ability to apply thinking techniques to real life in order to do better.
It might still be valuable to know how LWers do on a more formal test of probability-related knowledge; after all, most people in the general public don’t know Bayes’ theorem, so it’d be neat to see how good LW is at increasing “rationality literacy”. But that’s not the ultimate goal; what you ultimately want to measure is a group’s ability to pick out unexpected rationality-related problems and activate the correct mindware. If your Bayesian superpowers only activate when you’re being formally tested, they’re not all that useful as superpowers.
I can see why you’d criticize someone for saying “the problem is that the setting wasn’t formal enough” but that’s not exactly what I was getting at. What I was getting at is that there’s a limit to how much thinking that one can do in a day, everyone’s limit is different, and a lot of people do things to ration their brainpower so they avoid running out of it. This comment on mental stamina explains more.
My point was, more clearly worded: It would be a very rare person who possesses enough mental stamina to be rational in literally every single situation. That’s a wonderful ideal, but the reality is that most people are going to ration brainpower. If your expectation is that rationalists should never ration brainpower and should be rational constantly, this is an unrealistic expectation. A more realistic expectation is that people should identify the things they need to think extra hard about, and correctly use rational thinking skills at those times. Therefore, testing for the skills when they’re trying is probably the only way to detect a difference. There are inevitably going to be times when they’re not trying very hard, and if you catch them at one of those times, well, you’re not going to see rational thinking skills. It may be that some of these things can be ingrained in ways that don’t use up a person’s mental stamina, but to expect that rationality can be learned in such a way that it is applied constantly strikes me as an unreasoned assumption.
Now I wonder if the entire difference between the control group’s results and LessWrong’s results was that Yvain asked the control group only one question, whereas LessWrong had answered 14 pages of questions before reaching it.
Agreed that rationality is mentally tiring...I went back and read your comment, too. However:
To me, rationality is mostly the ability to notice that “whew, this is a problem that wasn’t in the problem-set of the ancestral environment, therefore my intuitions probably won’t be useful and I need to think”. The only way a rationalist would have to be analytical all the time is if they were very BAD at doing this, and had to assume that every situation and problem required intense thought. Most situations don’t. In order to be an efficient rationalist, you have to be able to notice which situations do.
Any question on a written test isn’t a great measure of real-life rationality performance, but there are plenty of situations in everyday life when people have to make decisions based on some unknown quantities, and would benefit from being able to calibrate exactly how much they do know. Some people might answer better on the written test than if faced with a similar problem in real life, but I think it’s unlikely that anyone would do worse on the test than in real life.
Re having to think all the time: a good start is to develop a habit of rejecting certainty about judgments and beliefs that you haven’t examined sufficiently (that is, if your intuition shouts at you that something is quite clear, but you haven’t thought about that for a few minutes, ignore that intuition unless you have a reliable reason to not ignore it in that case). If you don’t have stamina or incentives to examine such beliefs/judgments in more detail, that’s all right, as long as you remain correspondingly uncertain, and realize that the decisions you make might be suboptimal for that reason (which should suitably adjust your incentives for thinking harder, depending on the importance of the decisions).
I don’t think you could really apply any ‘algorithmic’ method to that question (other than looking it up, but that would be cheating). It was a test of how much confidence you put in your heuristics. (BTW, it seems that I’ve underestimated mine, or I’ve been lucky, since I got the date off by one year but estimated my confidence at 50%, IIRC). Still, it was a valuable test, since most of human reasoning is necessarily heuristic.
Really? What probability do you assign to that statement being true? :D
I’m under the impression that Bayes’ theorem is included in the high school math programs of most developed countries, and I’m certain it is included in any science and engineering college program.
I assign about 80% probability to less than 25% of adults knowing Bayes’ theorem and how to use it. I took physics and calculus and other such advanced courses in high school, and graduated never having heard of Bayes’ theorem. I didn’t learn about it in university, either–granted, I was in ‘Statistics for Nursing’; it’s possible that the ‘Statistics for Engineering’ syllabus included it.
Only 80%?
In the USA, about 30% of adults have a bachelor’s degree or higher, and about 44% of those have done a degree where I can slightly conceive that they might possibly meet Bayes’ theorem (those in the science & engineering and science- & engineering-related categories (includes economics), p. 3), i.e. as a very loose bound 13% of US adults may have met Bayes’ theorem.
Even bumping the 30% up to the 56% who have “some college” and using the 44% as an estimate of the true ratio of possible-Bayes’-knowledge, that’s still only about 25% of the US adult population.
(I’ve no idea how this extends to the rest of the world, the US data was easiest to find.)
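The back-of-envelope arithmetic above is easy to check in a few lines (the percentages are the rough US figures quoted in the comment, not exact data):

```python
# rough US figures quoted above -- assumptions, not exact data
bachelors = 0.30           # adults with a bachelor's degree or higher
relevant_share = 0.44      # of those, degrees that might plausibly cover Bayes' theorem
some_college = 0.56        # adults with at least "some college"

loose_bound = bachelors * relevant_share        # ~13% of adults
generous_bound = some_college * relevant_share  # ~25% of adults
print(loose_bound, generous_bound)
```

Even the generous bound stays just under 25%, which is the point of the comment above.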
You did your research and earned your confidence level. I didn’t look anything up, just based an estimate on anecdotal evidence (the fact that I didn’t learn it in school despite taking lots of sciences). Knowing what you just told me, I would update my confidence level a little–I’m probably 90% sure that less than 25% of adults know Bayes’ theorem. (I should clarify: by “adults” I mean adults living in the US, Canada, Britain, and other countries with similar school systems. The percentage for the whole world is likely significantly lower.)
I hear Britain’s school system is much better than the US’s.
Once you control for demographics, the US public school system actually performs relatively well.
Good point.
It’s not great by international standards, but I have heard that the US system is particularly bad for an advanced country.
In terms of outcomes, the US does pretty terribly when considered 1 country, but when split into several countries it appears at the top of each class. Really, the EU is cheating by considering itself multiple countries.
The EU arguably is more heterogeneous than the US. But then, India is even more so.
How’s it being split?
I actually thought someone would dig up and provide the relevant link by now. I’ll have to find it.
You mean comparing poorer states to poorer countries?
Actually it is quite good (even for an “advanced country”) if you compare the test scores of, say, Swedes and Swedish-Americans rather than Swedes and Americans as a whole.
I wonder what that’s controlling for? Cultural tendencies to have different levels of work ethic?
Hmmm. So it’s “good” but people with the wrong genes are spoiling the average somehow.
The UK high school system does not cover Bayes’ theorem.
If you choose maths as one of your A-levels, there’s a good chance you will cover stats 1 which includes the formula for Bayes’ Theorem and how to apply it to calculate medical test false positives/false negatives (and equivalent problems). However it isn’t named and the significance to science/rationality is not explained, so it’s just seen as “one more formula to learn”.
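For reference, the medical-test false-positive calculation such a stats course covers is a direct application of Bayes’ theorem. A minimal sketch with illustrative numbers (1% prevalence, 90% sensitivity, 5% false-positive rate; these are made up for the example, not taken from any syllabus):

```python
# illustrative numbers, not from any particular syllabus
p_disease = 0.01            # prior: prevalence in the tested population
p_pos_given_disease = 0.90  # test sensitivity
p_pos_given_healthy = 0.05  # false-positive rate

# total probability of a positive result (law of total probability)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' theorem: P(disease | positive test)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # about 0.154: most positives are false positives
```

The counterintuitive takeaway (a positive result still leaves you under 16% likely to be sick) is exactly the “significance to rationality” that the course reportedly never explains.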
Offhand, 1⁄2 of young people do A-levels, 1⁄4 of those do maths, and 2⁄3 of those do stats, giving us 1⁄12 of young people. I don’t think any of these numbers are off by enough to push the fraction over 25%.
Maybe you guys could solve that problem by publishing some results demonstrating its extreme significance.
As far as I know, it’s been formally demonstrated to be the absolutely mathematically-optimal method of achieving maximal hypothesis accuracy in an environment with obscured, limited or unreliable information.
That’s basically saying: “There is no possible way to do better than this using mathematics, and as far as we know there doesn’t yet exist anything more powerful than mathematics.”
What more could you want? A theorem proving that any optimal decision theory must necessarily use Bayesian updating? ETA: It has been pointed out that there already exists such a theorem. I could’ve found that out by looking it up. Oops.
There already is such a theorem. From Wikipedia:
As far as I can tell from wikipedia’s description of admissibility, it makes the same assumptions as CDT: That the outcome depends only on your action and the state of the environment, and not on any other properties of your algorithm. This assumption fails in multi-player games.
So your quote actually means: If you’re going to use CDT then Bayes is the optimal way to derive your probabilities.
And the list of notable problems that have been solved using Bayes is...? Bayes doesn’t tell you how to make your information more copious or accurate, although there are plenty of techniques for doing that. Bayes also doesn’t tell you how to formulate novel hypotheses. It also doesn’t tell you how to deal with conceptual problems that are not yet suitable for number crunching. It looks to me like Bayes is actually a rather small part of the picture.
ETA:
A similar point is cogently argued by RichardKennaway here.
PS: -T-w-o- Three downvotes, and not a shred of counterargument. Typical.
Half of statistics these days is Bayesian. Do you want to defend the claim that statistics solves no notable problems?
As usual, I add my downvote to whining about downvotes. Since you think it’s ‘typical’ and this vindicates your claims, I’m sure you’ll be pleased that I’m helping prove you right.
Great. Then the UK education system is exactly right in teaching Bayes as part of statistics, but not as a general-purpose solution to everything. ETA: But surely the LW take on Bayes is that it is much more than something useful in statistics.
No, I want to defend the claims that Bayes is not a general-purpose solution to everything, is not a substitute for other cognitive disciplines, is of no benefit to many people, and is of no use in many contexts.
Please inform me of the correct way to indicate that the karma system is being misused.
Now you’re just backing off your claim. What happened to your list?
First point: if Bayesian statistics is half of statistics, the description of the UK course is of it as being way way less than half the course. Therefore the UK system is very far from being ‘exactly right’.
Second point: The optimistic meta-induction is that Bayesian statistics has gone from being used by a literal handful of statisticians to being widespread and possibly a majority now or in the near future; therefore, it will continue spreading and eating more of statistics in general, and the course will get wronger and wronger, and your claims less and less right.
So you’re just splashing around a lot of bullshit and distractions when you demand lists and talk about the UK course being exactly right, since those aren’t what you are actually trying to claim. Good to know!
What’s the point of indicating when it’s not being misused?
I am not going to give a full response, because your comments are obstreperous, but see RichardKennaway’s discussion for LW’s overarching hopes for Bayes, and its limitations.
You have your opinion, on that, I have mine. You can state your opinion, I can’t state mine. I can’t discuss the censorship, because discussions of censorship are censored.
You’re stating it right now. Oh the ironing.
It’s in a downvoted thread. So it isn’t visible. If negative karma doesn’t do anything regarding the visibility of comments, why have the button? Sheesh.
And so begins the equivocation on ‘people have to click a button to see it’ with ‘censorship’.
And so I ask you a second time: what is the button for?
And so begins another goal-shifting, like the list or like the claim of ‘censorship’, this time to defining karma systems. Pardon me if I don’t care to continue this game.
OK. You cannot give an answer that will not embarass yourself. Got that.
Must be a problem of the American school system, I suppose.
Did they teach you about conditional probability? Usually Bayes’ theorem is introduced right after the definition of conditional probability.
There are national and international surveys of quantitative literacy in adults. The U.S. does reasonably well in these, but in general the level of knowledge is appalling to math teachers. See this pdf (page 118 of the pdf, the in-text page number is “Section III, 93”) for the quantitative literacy questions, and the percentage of the general population attaining each level of skill. Less than a fifth of the population can handle basic arithmetic operations to perform tasks like this:
People who haven’t learned and retained basic arithmetic are not going to have a grasp of Bayes’ theorem.
It was in my high school curriculum (in Italy, in the mid-2000s), but the teacher spent probably only 5 minutes on it, so I would be surprised if a nontrivial number of my classmates who haven’t also heard of it somewhere else remember it from there. IIRC it was also briefly mentioned in the part about probability and statistics of my “introduction to physics” course in my first year of university, but that’s it. I wouldn’t be surprised if more than 50% of physics graduates remember hardly anything about it other than its name.
I’m pretty sure Ireland doesn’t have it on our curriculum, not sure how typical we are.
Well, it’s certainly not included in the US high school curriculum.
If you’ll excuse the expression, I’m suspicious of your sudden epiphany. That is, I accept your suggestion as a possible explanation (although I’m not convinced, mainly because this doesn’t describe the way I answered the question; I don’t know about anyone else). But I think saying “Oh gosh! The true answer has been staring us in the face all along!” is premature.
I am not sure why you took “a new explanation” so seriously. I guess I have to be really careful on LessWrong to distinguish ideas from actual beliefs. I do not think it’s “The True Answer”. I just think it’s a rather obvious alternate explanation that should have occurred to me immediately, and didn’t, and I’m surprised about that, and about the fact that it didn’t seem to occur to Yvain either. I reworded some things to make it more obvious that I am not trying to present this as “The True Answer” but just as an idea.
Thank you, I appreciate that.
Would you mind trying to avoid jumping to the conclusion that I’m acting stupid in the future, Kindly? I definitely don’t mind being told “Your statement could be interpreted as such-and-such stupid behavior, so you may want to change it.” but it’s a little frustrating when people speak to me as if they really believe I am as confused as your “The True Answer” interpretation would imply.
I’m not sure why you’re accusing me of this. I often disagree with people, but I usually don’t assume the people I disagree with are stupid. This is especially true when we disagree due to a misunderstanding.
(I don’t intend to continue this line of conversation.)
Well it’s my perception that it would be pretty stupid to jump to the conclusion that I had it all figured out just out of nowhere up there. If that’s not your perception, too, then it’s not—but that would be unexpected to me who holds the perception that it would be a kind of stupid thing to do. I don’t know what wording to use other than “Please try not to jump to the conclusion that I’m doing stupid things.” but just substitute “stupid” for whatever word you would use, and then please try not to jump to conclusions that I am doing whatever it is that you call that, okay?
Any results for the calibration IQ?
The original question:
Well, the predictions spread the usual range and look OK to me:
It could be that many people self-reported IQ based off of their SAT or ACT scores, which would explain away the correlation. How many people reported both SAT and ACT scores?
If many people used the same formula to convert their SAT score to an IQ score, I expected the line would jump out, but I don’t see anything like that on the scatterplot.
IQs are often multiples of 5. I think that is because IQ tests that do not aim for precision report round numbers, whereas conversion charts would produce scores ending in any digit. 56% of survey IQs are multiples of 5. For those reporting both IQ and SAT, the number is 59%, so it is not depressed (or inflated) by those doing conversions. If we remove the multiples of 5, the correlation drops to .2 and stops being statistically significant. But both scatterplots look pretty similar.
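The subsetting described above is easy to reproduce. A rough sketch on synthetic stand-in data (the column names and numbers here are made up; the real survey CSV differs):

```python
import random

def pearson(xs, ys):
    # plain Pearson correlation, no external libraries
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
# synthetic stand-ins for the survey's IQ and SAT columns
iq = [random.randrange(120, 160) for _ in range(300)]
sat = [v * 10 + random.gauss(0, 100) for v in iq]

# split respondents on whether the self-reported IQ is a round multiple of 5
r_round = pearson([v for v in iq if v % 5 == 0],
                  [s for v, s in zip(iq, sat) if v % 5 == 0])
r_other = pearson([v for v in iq if v % 5 != 0],
                  [s for v, s in zip(iq, sat) if v % 5 != 0])
print(round(r_round, 2), round(r_other, 2))
```

On the real data the interesting question is whether the two subset correlations differ, which is what the comment above reports.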
You mean either of the SATs?
About 25% of cis women who answered the question are vegetarian, compared to about 12.5% of cis men. This is much less extreme than among people I’ve met in person (only 2 men that I can remember, vs. at least 10 women).
How many of us are there:
A couple of months ago, I asked Trike, the company that manages the website, for a complete list of LessWrong registration dates in order to make a growth chart. I received it on 08-23-2012. The data shows that LessWrong has 13,727 total users, not including spammers and accounts that were deleted.
See also: LessWrong Growth Bar Graph (in the thread “Preventing discussion from being watered down by an “endless September” user influx.”)
Problem:
The line: “This includes all types with greater than 10 people. You can see the full table here.” links to a gif that is inaccurate, has no key to explain oddities, and is of such poor graphical quality that parts of it are actually unreadable.
It may be that the reason that invalid personality types like “INNJ” are listed is due to typos on the part of the survey participants. If so, then great! But it may also be that the person who constructed this graphic put typos in (I consider this fairly likely due to the fact that the graphical quality is so low that some of it’s not readable. For instance, the number of INTPs is so unclear I can’t even tell what it says—it looks like 113 but your results in the post claim 143). It isn’t obvious why the invalid types are there, so a key or note would be nice.
Also, some of the participants had a good idea: if one of your personality dimension letters changes when taking the test multiple times, you can fill it out with an X. Can we add an instruction for them to do this on the next survey?
The graphic was automatically generated by a computer program, so there’s no chance that typos were introduced. There’s no key to explain oddities because I have no way of knowing the explanation any better than you. When in doubt, blame survey takers being trolls.
But I do apologize for the poor graphic quality.
I don’t take this test all too often (in fact, didn’t take the one in the survey IIRC), but if we can do this, here’s my personality type: IXXX. Oh wait.
(Yes, seriously, if I take an online MBTI test several times at evenly spaced time intervals within the same month, the first varies between .6 and .95 towards I, and the others just jump around in a manner I can’t predict (yet, anyway, probably could eventually if I did more timewasting internet-test-taking))
I predict similar (perhaps less pronounced?) variation would be present in around 30% of LWers (not too confident in this number), and that we could reduce the variation dramatically by eliminating confused questions and tabooing ambiguous or vague words / phrases, replacing them with multiple questions containing various common meanings, and an even greater (bitwise) reduction by giving more contextual information from which the respondent can infer or judge values and weight variables on “It depends, but I suppose most of the time I would...” -type answers. (much more confident in these last two predictions than the first)
Well, possibly. The t-distribution is used for “estimating the mean of a normally distributed population,” (yay wikipedia) and you’re trying to estimate the mean of a slanted-uniformly-distributed-with-a-spike-at-the-beginning population.
But there is another important consideration, which is that applying more scrutiny to unexpected results gives you systematic error (confirmation bias), and that’s bad. To avoid this big problem, any increase in test quality should probably be part of a wholesale reanalysis, i.e. prolly not gonna happen. But there is another route, which is just accepting that your results are imperfect and widening your mental error bars. After all, where does this systematic error come from when you re-analyze unexpected results? It comes from you making mistakes on other things too, but not re-analyzing them! So once you know about the systematic error, you also know about all these other mistakes you have on average made :P
Yeah, it’d have to be some combination of a uniform Poisson (since we don’t seem to be growing a lot, per Yvain) and an exponential distribution (constant mortality of users). If we graph histograms, either blunt or finegrained, it looks like that but also with weird huge spikes besides the original OB->LW spike:
But on the plus side, if we look at the genders as a box plot, we discover why the mean is lower for women but there’s not significance:
There are, after all, many fewer women.
The spikes are just due to people estimating in half-years: 12, 18, 24, 30, 36.
Well-educated atheist American white men in their mid 20s with no children who work with computers.
“The new thing for people who would have been Randian Objectivists 30 years ago.”
The demographics are essentially the same except LW is probably more than 2:1 politically left vs. right. Objectivists are probably more than 2:1 in the other direction.
Since when did people like us decide it is OK to be liberal/socialist?
I think there is a significant correlation between Objectivism/hardcore libertarianism and the described demographics, but it does not mean that all or even most people of that demographic have that ideology; it just means that this demographic is much more likely to have this ideology than a random person is.
Also, while it is true that there are more LWers that are atheist than theist, male than female, white than other races, etc, it is at the same time very unlikely that most LWers have all those characteristics. (Being typical in all respects is very atypical). And just having one of those characteristics different might make the correlation with Objectivism/hardcore libertarianism reduce a lot.
Given that we are 86.2% male cisgender, 84.3% Caucasian (non-Hispanic), and 83.3% atheist (spiritual or not), that means a minimum of 53% of LWers are all three; probably the actual number is over 60%.
In answer to the parent, atheism in America may have started becoming a more liberal pursuit somewhere around 30 years ago when the Republican party started being substantially more religious and dismissive of atheism and science.
Out of the 1067 people who made their responses public, 694 are all three, which is 65%.
Magfrump, how did you manage to guess that it would be over the product? I wouldn’t have thought they would be positively correlated.
53% isn’t the product; the product is 60.5%. 53% is 100%-13.8%-15.7%-16.7%=53.8%, i.e. every person who deviates from typical does so in only one way. magfrump did correctly guess that it was more likely that they were positively correlated than not, which is what I would have guessed as well. In the general population, being male is correlated with being atheist, and I haven’t looked at the race-atheism correlations lately, but I wouldn’t be surprised if being non-hispanic white is correlated with atheism.
[edit] Er, if by “the product” you were referring to the part where he says it’s likely to be over 60%, then ignore this comment.
good thing I read the last sentence first, yeah I was :p
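The bound and the product being compared in this exchange can be checked directly; a quick sketch using the percentages quoted above:

```python
# supermajority shares reported in the survey results
male, white, atheist = 0.862, 0.843, 0.833

# lower bound: every atypical respondent deviates in exactly one way (~53.8%)
lower_bound = 1 - (1 - male) - (1 - white) - (1 - atheist)

# what independence of the three traits would predict (~60.5%)
if_independent = male * white * atheist

# what the public responses actually showed: 694 of 1067 (~65%)
observed = 694 / 1067
print(round(lower_bound, 3), round(if_independent, 3), round(observed, 3))
```

Since the observed 65% exceeds the independence prediction, the three traits are indeed positively correlated among respondents, as guessed.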
Nice job breaking it, Nixon.
I would actually have said that Nixon was the last Republican president that wasn’t actively hostile to science and atheism. Compared with Reagan and Bush, he certainly has a very different reputation.
EDIT: I would have said that without having looked at the link or having been alive at the time.
How is the Southern Strategy related to atheism? It seems to have been an appeal to ethnocentrism. What are the religiosity stats for Southern vs. Northern states in that period?
My first response is that human societies are like ecosystems in that it’s difficult to only do one thing to them; moreover, to understand this it’s probably better to see the “appeal to ethnocentrism” as more of a strategic meme intended to shift the voting behaviors of an entire group. The Republicans simply wanted to expand the “vote Republican” behavior, identified a likely population, and then found something easy, near and dear to get them motivated: States Rights (at the time used as a justification for segregation). Former Dixiecrats flock to the Republican Party in droves and change its makeup—where it used to be a party of the Northern cities, it’s now becoming the party of rural, Southern conservatives.
This is the late 60s. The culture wars are already in swing. And the Republicans are now fast gaining ground among a population that is largely rural-to-suburban, quite ethnically homogenous and xenophobic, for whom the church is the center for much of civil as well as religious society.
Martin Luther King has just been assassinated. There are riots among African-American communities over this. The Black Power movement comes to prominence, changing the flavour of much of the civil rights movement’s dialogue (King terrified people enough with his civil disobedience). Demonstrations against the Vietnam War are a regular occurrence and shown on nightly news programs—very often the flag is burned. The hippie culture is spreading drugs and free love.
The new Republican contingent is quite amenable when Nixon starts talking about a side dish of Law And Order next to States Rights. The seeming paradox simply does not matter here. As the Republican party demographics and representation shift over the next decade, federalism in areas relating to the culture war becomes a key part of voting trends and policies within this bloc. Explicitly religious and ethno-nationalist rhetoric become critical parts of communication within this group. The Creation Science movement comes to prominence, not as a quirky little subcultural thing barely anyone thinks about much but as a platform of the more extreme segments of the religious right. Republican atheists find themselves part of a deeply-divided base. They probably don’t like it much, but the Southern Strategy seems to be very successful, at both state and federal levels, so they get lots of the attention at a party level.
It takes a while for this all to shake out. Some of the atheists jump ship; others are in traditional Republican strongholds where this is not such a point of tension but find themselves increasingly-outnumbered as the party grows into a new population and thus, less-prioritized by the Party in general.
This new Republican movement is not only successful in the South—pretty much anyplace where demographics match up somewhat closely is a new Republican hotspot waiting to happen. The Lower Midwest, the Rockies and the Southwest all see inroads of this new brand as well.
The Republican Party is becoming more hostile to atheists. The Democrats are staying steady or slowly becoming more receptive, but critically, their new Civil Rights platform means that they’re growing steadily in urban areas in the North and the upper East Coast, as well as California. These areas are better breeding grounds for atheists—high population densities, higher income, higher rates of education, more diverse in many ways and sometimes more tolerant of that diversity, sometimes even in matters of civil law.
Not available to me offhand, but I’m not necessarily describing a net shift in religiosity, but rather a redistribution of certain forms of it within the population, as indexed to Party affiliation.
Not sure, but they’ve generally been consistently higher for a long time. That’s where we get the term Bible Belt which has been around since the late 1920s. Moreover, the Lost Cause was early on heavily connected to religion.
Empirically, Nixon was ok on science issues. For example, Nixon was essentially responsible for both the founding of NOAA and the EPA. If one wants to hang this on a specific Republican President, Reagan seems easier, given that he made multiple deeply uninformed comments about science.
The “etc.” was meant to cover the other listed demographic characteristics: young age, American, childless… If you add all those, the actual number of LWers satisfying all of them should drop below ~40%.
Adding in American, no children, works with computers, and less than 30 drops it to 10%; education is a bit tougher to code (ideally, you would want to index it by age). Most of the work is being done by “American,” “less than 30,” and “works with computers.” (No children, conditioned on being less than 30, does almost nothing.)
While this is usually true, I suspect that these measures will not all be independent (i.e. there will be some people who are outsiders to Less Wrong in many ways) and that combined with the fact that there are huge supermajorities (rather than simply majorities) of these populations will mean that most LWers are in fact highly typical.
Either way this doesn’t address the core claim of whether these characteristics select for Libertarianism. I don’t have the time or experience with spreadsheets to figure this out immediately but we could certainly look at that as an empirical question. Looking back my earlier reply was really outside the spirit of your first post so I apologize for that.
By the way, has anyone figured out how to load the CSV in R?
read.table chokes with problems like “Error in read.table("for_public.csv", header = TRUE, quote = "", sep = ",") : more columns than column names”; dos2unix doesn’t help, and loading it up in OpenOffice, it looks fine but re-exporting as CSV leads to the same darn errors. Mucking about, I can get it to load if I delete everything but the first entry and then the last 4 commas, but this solution doesn’t work for any additional entries! This worked: dat <- read.csv('http://raikoth.net/Stuff/LessWrong/for_public.csv')
That works, thanks.
Another R issue. How do I convert the scores from IQTest into real numbers? as.numeric seems to do strange things.
use “as.numeric(as.character(dat$IQTest))”
The IQtest data is stored as a factor. A factor variable has a set of levels, numbered 1, 2, 3, …, which are the values the variable can possibly take on, plus labels for those levels. as.numeric(X) returns the level numbers of X. as.character(X) returns the labels of X. In the case that the labels are actually numbers (usually integers that R is interpreting as character labels for some reason), as.numeric(as.character(X)) will return the numeric values that R is interpreting as labels.
EDIT:
In this case, when no value for IQtest was reported, it was stored as " " instead of "", which made R think the variable contained character data, which R defaults to treating as factors. The " "s should all be NA’s once it’s converted properly.
“iqt <- as.numeric(dat$IQTest)” The already-numeric IQ is in dat$IQ; iqt is only the suspect online IQtest.
A serious design flaw in S/R means that your code fails badly and silently. The man page suggests that you try as.numeric(factor(5:10)).
Matt Simpson’s code is correct. The man page suggests the slightly faster as.numeric(levels(dat$IQTest))[dat$IQTest]. That example also shows why they chose the coercion that they did.
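For anyone puzzled by that man-page example, here is a rough Python sketch of what R’s factor machinery does (a simplification: levels are the sorted unique values, stored as character labels, and as.numeric(factor(x)) returns level indices):

```python
# mimic factor(5:10): levels are the sorted unique values, kept as labels
x = list(range(5, 11))
levels = sorted(set(x))                      # [5, 6, 7, 8, 9, 10]
labels = [str(v) for v in levels]            # what R stores: '5' .. '10'

# as.numeric(factor(x)) returns the level index of each element, not the value
codes = [levels.index(v) + 1 for v in x]     # [1, 2, 3, 4, 5, 6] -- not 5:10!

# as.numeric(as.character(factor(x))) goes via the labels, recovering the values
values = [int(labels[c - 1]) for c in codes] # [5, 6, 7, 8, 9, 10]
```

This is why naive as.numeric on the IQtest column silently returns level numbers rather than the reported IQ scores.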
If you really did quote="", then you don’t have any quote character and it won’t work. But that’s probably some kind of markdown error. The default in read.table is to allow both double and single quotes, while the default in read.csv is to allow only double quotes; I find that if I change your argument to quote="\"" to allow only double quotes, then it reads it with no errors. Another difference between read.table and read.csv is that read.table defaults to allowing # comments, which mangles one of the lines. This can be fixed with comment.char="", at which point read.csv and read.table produce the same result.
(Nearly deleting this post due to my idiocy. I can’t read column titles correctly.)
SAT1600 takers: 8% theist
SAT2400 takers: 0% theist
Income-wise, there seem to be some anomalies and I’m not sure what factors we ought to expect to be driving them or what is privileging a hypothesis. So here’s a poll asking your opinion on which of the correlates we should expect to be most important in governing LWers’ income (given in alphabetical order):
[pollid:260]
Sweet. I was in the one correctly calibrated cohort—I knew just how slim my chances of being right were!
On the reason for the absence of multiple comparison correction in my various quick tests here: http://lesswrong.com/lw/h56/the_universal_medical_journal_article_error/8q28
A question arose on #lesswrong as to whether female LWers might be more likely to find LW through MoR than not. There is an imbalance in MoR referrals by gender, but it’s not sufficiently extreme to hit significance in the limited survey dataset (need moar women).
Doesn’t need to hit an arbitrary (if historically established) 0.05 to be significant. 0.1048 still means a (EDIT:) higher probability that you’ve found something than not.
(Thanks for the correction.)
That is not what p-values mean.
Another analysis: a t-test/logistic regression does not indicate that people who got the first CFAR logic puzzle right answered more survey questions than those who got it wrong. (Tim Tyler suggested that there might be a commitment/rushing effect.)
10 people said “Drug C: reduces the number of headaches per year from 100 to 60. It costs $100 per year” over “Drug B: reduces the number of headaches per year from 100 to 50. It costs $100 per year” on CFAR question #4...
I said “Drug A: reduces the number of headaches per year from 100 to 30. It costs $350 per year” personally. I think there’s a case for B, maybe, but who picks C?
Edit:nevermind
Just wanted to point out a few fallacies in the above:
- “can solve the Schrodinger Equation” means nothing or less without specifying the problem you are solving. The two simplest problems taught in a modern physics course, the free particle and the one-dimensional infinite square well, are hardly comparable with, say, calculating the MRI parameters.
- self-reporting “can solve the Schrodinger Equation” does not mean one actually can.
- even then, “can solve the Schrodinger Equation” does not mean “understands quantum mechanics”, as it does not require one to understand measurement and decoherence, which is what motivates MWI in the first place.
- there are many versions of MWI, from literal (“the Universe splits into two or more every time something happens”) to Platonic (“Mathematical Universe”).
Basically, I hope that you realize that this is a prime example of “garbage in, garbage out”. I suppose it’s a good thing that there was no correlation, otherwise one might draw some unwarranted conclusions from this.
If the correlation had come out the other way, you’d be jumping on it as proof of your thesis that LWers favor MWI because they are sheepishly following Eliezer. In what universe where they are indeed sheepishly and ignorantly following him does a question like that show nothing whatsoever?
Probably (though not a proof, just one piece of evidence). I suspect that “garbage in” is the reason why we don’t see it, but I do not have a convincing argument either way, short of asking Eliezer to post an insincere message “I no longer believe in MWI”, take the survey soon after, then have him retract the retraction. This would, however, be rather damaging to his credibility in general.
I’m assuming that the question was meant as a simple and polite proxy for “Does your knowledge of quantum mechanics include some actual mathematical content, or is it just taken from popular science books and articles?”
Probably. The reason he mentioned the Schrodinger equation was likely an attempt to quantify it. I am arguing that the threshold is set too low to be useful.
The actual survey specified “can solve the Schrodinger equation for a hydrogen atom”. Although it is not exactly synonymous with “understands quantum mechanics”, you would expect them to be highly correlated.
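For reference, the time-independent Schrödinger equation for the hydrogen atom (electron of reduced mass $\mu$ in a Coulomb potential) that the survey question asks about:

```latex
-\frac{\hbar^2}{2\mu}\nabla^2\psi(\mathbf{r})
  \;-\; \frac{e^2}{4\pi\varepsilon_0 r}\,\psi(\mathbf{r})
  \;=\; E\,\psi(\mathbf{r}),
\qquad
E_n = -\frac{13.6\ \text{eV}}{n^2}
```

Solving it means separating variables in spherical coordinates and recovering the $E_n$ levels, which demands noticeably more mathematical machinery than the free particle or the square well.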
Right, sorry, I forgot that qualifier since the time I took the survey. It does imply more familiarity with the underlying math than the simplest possible cases. Still, I recall that when I was at that level, I was untroubled by the foundational issues, just being happy to have mastered the math.
I wonder if there is a way to test this assertion. One would presumably start by defining what “understands quantum mechanics” means.
When I was learning to solve the hydrogen atom, they didn’t even talk about the foundational issues, just waved it off with some wave-particle duality nonsense. But still, it seems like as good a criterion as you’re going to get, unless you want to ask if people have a Master in Physics (Quantum).
I suppose that a better question would be related to the EPR paradox, but I’m not sure what academic course would cover it.
The question was specifically about the SE for a hydrogen atom. But I agree that having good PDE-fu isn’t necessarily a good proxy for anything else.
I suspect asking about density matrices might be a better test.
I gave you a tentative upvote because this comment sounds very plausible, but since I don’t know how to solve any version of the Schrodinger Equation, I’m going by more general priors.
Sure. It is very reasonable to put some trust (but probably not too much) in what EY says about MWI if your experience shows that he is not out to lunch in the areas of your expertise. Assuming that is what you mean by “more general priors”.
That wasn’t at all what I had in mind, though Eliezer’s generally high level of intelligence and meticulousness makes MWI seem a little more likely to me.
No, my strongest general priors in play are that it’s likely that there are different degrees of understanding the Schrodinger Equation, that people might kid themselves about how well they understand it, and that there’s more than one take on MWI. My prior that there’s more to understanding MWI than the Schrodinger equation is a little weaker, but not much.
The CSV seems to have different data than you reported. For example, it shows 63% theists and 18% atheists. It doesn’t appear to be a simple reversal of labels.
Of those who believe with over 70% confidence that we are in a simulation, only one person assigns a higher chance to God existing than to us living in a simulation. Given that God was defined here as someone with world-making powers, how do you get a simulation without a god?
Look again at the survey questions:
A simulator is not a god because gods are ontologically basic, while simulators are not.
John lives in a simulation. He thinks about the properties of the simulator. The simulator can’t be reduced to the kind of physical objects that John can observe in his reality. The simulator is made of different stuff.
If it’s just about reducing the entity into multiple parts, then that’s possible for the Christian God, who’s made up of three parts.
Otherwise, can you point me to a good definition of “ontologically basic”?
— Richard Carrier
It’s not a matter of whether they are “made of different stuff” but if they are made of stuff at all. A simulator is no more supernatural to us than we are to a boxed AI; we’re both running inside the same material universe, just in different ways.
The boxed AI runs inside a universe that follows the laws of Turing computation.
Nature is the stuff around us. A simulation simulates nature. The one who runs the simulation isn’t part of that nature. The simulator can exist without needing anything from the nature in which John lives.
If I’m reading Harry Potter, Harry Potter lives in a world where magic happens. I don’t. Those two worlds are fundamentally different. The magic that happens in Harry Potter’s world is causally independent of myself.
According to Yvain’s definition, you are correct. On the other hand, I can’t think of something more godlike than creating or designing the universe, can you? It just seems like a very idiosyncratic definition, which is why I complained to Yvain about it last year.
Sure. For instance, some theists claim that God created and maintains logic itself.
Moreover, a simulator who is not ontologically basic (i.e. is made of matter; arose through material processes in their own universe) does not meet four of Aquinas’s “five ways” — unmoved mover, first cause, necessary being, or maximum degree of goodness.
Would someone who created a computer that created the universe count as a god? I can easily write computer games with more complex behavior than I feel capable of fully comprehending, but I would not consider that computer program an intelligent entity. I can imagine that someone more educated and with a higher mental capacity than I could similarly write a computer program that is capable of creating and maintaining in simulation a universe with the global constants and initial conditions necessary to produce intelligent life without the program actually qualifying as intelligent itself.
My personal belief is that if there is a “god”, he is quite probably much like a video game programmer, who can set up a universe like an MMO and let it run “infinitely” in “real-time”, but, being constrained to a similar time-scale as the “players”, is unable to make a large number of fine-grained adjustments to local variables at the immediate behest of said players (i.e. “answering prayers”). Someday we may get a version 2.0 release which allows third-party plugins so players can hack the universe to answer their own prayers, but I don’t place a high conditional probability on that happening within my projected lifetime.
Given that all the ‘players’ are running in the universe in question, being able to make a large number of fine-grained adjustments to local variables in an instant (in-universe time) is simple; simply pause the simulation.
...unless some of you out there are actually players from outside the universe, in which case the rest of us would appreciate a hint.
What, the myriad prophets of revealed religions and cults aren’t enough of a hint for you?
This seems to unreasonably deviate from the way almost every type of simulation I’ve ever heard of works. You can pause/resume, you can increase/decrease the “timesteps” to make the world go “faster” (with larger quanta levels though), or you could just arbitrarily increase the raw processing speed of the machine running the simulation to make the ratio of simulated vs external time proportionally higher.
Of course, if what you’re proposing instead is that our “minds” are actually outside the simulation and sending input into it, rather than being fully contained within the simulation, then yes, the real-time constraint does apply.
ETA: In the latter case, I would argue that the term “Virtual Reality” is more appropriate and the use of “simulation” here is misleading and prone to conflating or confusing the two scenarios.
Hell, if the mathematical universe hypothesis is correct, then somewhere out there in the universe there is, with no intelligent priors, a collection of particles in the form of a computer, simulating a universe containing intelligent entities.
And the universe it is simulating itself contains an entity which created a computer which simulates a universe...
Given that the mathematical universe hypothesis is correct, what are the odds that the universe we experience
A- is the mathematical universe
B- is a computer analogue simulation which was generated spontaneously
C- is a computer analogue simulation which was created by an intelligence but is currently unattended
D- is a computer analogue simulation which is attended by someone who wishes to suppress the thought that the universe is atten---
We’ve already had a discussion on whether it’s appropriate to regard belief in the simulation argument as theism, and the consensus was no.
The post is about whether the label theist is appropriate. This question is about whether you believe God exists. Those two questions aren’t the same.
6.69% of the people on lesswrong who think that they are “Atheist and not spiritual” believe that the chance that God exists is higher than 50%.
atheistNS <- subset(survey, survey$ReligiousViews == "Atheist and not spiritual")
length(subset(atheistNS, as.numeric(atheistNS$PGod) > 50)$PGod) / length(atheistNS$PGod)
If you give people a definition of “god” which technically includes things outside the usual conception of god, they’re probably going to continue operating by the standard definitions of the term. Even if I believed we were living in a simulation, I wouldn’t believe in “god,” because it would be laden with so many misleading associations inapplicable to what I actually believed.
That the simulation controllers/creators aren’t necessarily omnibenevolent is one possible explanation for us being in a simulation and there not existing what most people call ‘god.’
Omnibenevolence was not in the criteria for this question. If people used it as a criterion, it suggests that they fell victim to some bias that lets them underrate the possibility that a god exists.
Maybe it’s the cognitive dissonance, because a good rationalist shouldn’t believe in a god?
Right, just saw that, my bad.
I still don’t see how that follows. Rationality can show that certain potential gods very probably don’t exist (e.g. Thor), but I think that’s as far as it goes.
I don’t argue here that it’s irrational to believe that god doesn’t exist. I argue that there’s a tribal belief among rationalists that part of being a good rationalist means being an atheist or teapot agnostic.
Rationalists who hold that tribal belief might experience cognitive dissonance when they have to put a percentage on the chance that God exists.
Ah, that makes sense. Thank you.
LW is not exactly an easy community to get into.
The lack of a willingness to pursue both different ideas and expanded ideas prevents me from commenting. There’s too much of a focus on probability, with a false understanding of what constitutes a “point of evidence”, and a general inability to navigate a probability cloud with the understanding that all points inside are possible answers.
When I do comment, I get voted down even if my post makes a good point. There seems to be no willingness to communicate with non-insiders. There also isn’t a system available which supports non-insiders, something which is necessary to make newcomers feel safe. (bold because it’s both important and easy to understand). I can’t get positive karma.
The comment system itself only supports a good 100-200 comments. It’s too daunting of a task to read every one, and it’s impossible to search for useful information. Most of the problem here is a programming problem which cannot be solved by anyone who doesn’t have direct access to their subconscious mind, which is the majority of LW.
I think, if you want to expand further, and attract more people, you have to expand your mind further: out into the probability field of the unknown. In the end, after you’ve figured out what’s right and what’s wrong, and are in the state of exploring probability fields, focusing on right vs wrong is detrimental to determining what’s actually right and what’s actually wrong. There is no right vs wrong in a probability field; not until all possibilities have been explored. “more right” and “more wrong” are deceiving impossibilities in a chaotic fractal algorithm.
Exploring the probability field is what people with access to their subconscious mind do. Most people don’t even have access to their conscious mind. If you want to uplift people, take into account that they can’t even think on purpose. (seriously. It’s not that it’s impossible for them, they just don’t know that it’s possible)
Of course, I’m one to talk. I need to figure out how to give people direct access to their karma, which is one big step beyond the subconscious mind. I, many-a-time, made the mistake of assuming people here have access to their subconscious.
Not systematically—you have some upvoted comments, some downvoted ones, and some at zero. That’s an informative set. You’re being told “more of this, less of that”. It may not be clear yet what distinguishes this from that.
Try figuring out the pattern. Take it as a puzzle.
I agree that the comment system could be much improved, but I don’t see where direct access to one’s subconscious fits into that.
Have you read the sequences?
I feel the urge to downvote you because I have no idea what you are talking about and you are tripping a lot of the usual quack-alarms. Usually, this is because you (generalized to refer to all people to which I have this reaction) don’t know what you’re talking about, and are not using our language.
As for why it’s important to use our language, consider that any idiot can talk like an outsider, and most of the people who can talk the talk around here are also saying intelligent things. As for the danger of excluding good ideas from outside, I’ve found that I can usually understand outsiders who have something valuable to say.
You are obviously intelligent, but if you want to contribute, you kindof have to read a lot of our background info. I’d like to make this easier, but right now, it’s a slog. Good luck.
Suggesting that a newbie read one million words (literally) before posting doesn’t sound that helpful to me. RationalWiki used to criticize LessWrong for that.
A better suggestion would be the About page, the FAQ and the Welcome thread.
The sequences actually are worth reading. More than the majority of comments on this site, I’d say. The original commenter actually mentioned trying to read every comment. In comparison to the entire comment content of the site, the sequences are not only far more concise but better quality and a much better time investment in general.
“We don’t need more clueless users making confused comments resulting in more responses that try and fail to summarize the sequences in 100 words.”
I’m not sure what to make of your post, honestly. There’s a lot of seemingly New-Age BS catchphrases/language, but it seems like you’re trying to say something.
Posts which may help you, for starters: What is Evidence? and 37 Ways Words Can Be Wrong. Do not attempt to read all the Sequences at once; there’s way too much material to do that.
Attracting people who enjoy spending time with internet friends and feeling superior to outgroups?
I think this is the most likely failure mode of less wrong and am unsure of why not much is being done to address it. The idea of CFAR is great; its success just isn’t tied to less wrong.
(Please don’t reply to or vote on this comment unless you have more than 200 predictions on predictionbook.)
Edit: Apparently people are okay with using a metric for agreement (upvoting) as a barrier for downvoting on LW, but not metrics for calibration (number of predictionbook predictions).
If it makes you feel better, one of the downvotes is mine and I have the most predictions of all on PredictionBook.
Downvoted for this. You shouldn’t try to stop people voting on your comments.