1) More generally, what if more intelligent people are more resistant to some biases, but equally prone to others? Then in the opinions of more intelligent people we would see less of the former biases, but perhaps more of the latter, and also more correct answers. The exact values would depend on the exact numbers in the model.
Example model: Imagine that a person must first avoid an error A, then an error B, before reaching the correct conclusion C. The chance of making error A is 70% for an average person and 50% for an intelligent person; the chance of making error B is 90% for an average person and 80% for an intelligent person.
Results for average people: 70% A, 27% B, 3% C. Results for intelligent people: 50% A, 40% B, 10% C. Possible interpretation: B is the correct answer, because that is where the difference is largest: 13 percentage points. (C is obviously a small minority even among intelligent people, so we can explain it away, e.g. by signalling.)
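To make the arithmetic explicit, here is a minimal sketch of the model above (the function name and the probabilities are just the hypothetical numbers from this example, nothing from the author’s actual method):

```python
# Hypothetical two-error model: a person first risks error A, then error B,
# and reaches the correct conclusion C only by avoiding both.
def outcome_distribution(p_error_a, p_error_b):
    return {
        "A": p_error_a,                          # stops at error A
        "B": (1 - p_error_a) * p_error_b,        # avoids A, makes error B
        "C": (1 - p_error_a) * (1 - p_error_b),  # avoids both: correct
    }

average = outcome_distribution(0.70, 0.90)      # {'A': 0.70, 'B': 0.27, 'C': 0.03}
intelligent = outcome_distribution(0.50, 0.80)  # {'A': 0.50, 'B': 0.40, 'C': 0.10}

# The naive "largest difference" criterion picks B (0.40 - 0.27 = 0.13):
diffs = {k: intelligent[k] - average[k] for k in average}
print(max(diffs, key=diffs.get))  # -> 'B'
```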
2) Intelligence can correlate with something, e.g. education, which may be a source of new errors. Not necessarily new kinds of biases, just new ways to apply the same old biases. For example, “quantum mysterious consciousness” explanations will be more popular among more educated people, while the less educated will instead use words like “spirits” and “magic” to explain the same concept.
3) An intelligent person can easily confuse “opinions of me and my friends” with “opinions of intelligent people”. Because how do most intelligent-and-proud-of-it people judge the intelligence of others? In my experience, usually by similarity of opinions.
EDIT: Does the author really give questionnaires and IQ tests to large enough samples of randomly selected people? In other words, even if we trust the author’s premises, should we trust his specific results too?
1) More generally, what if more intelligent people are more resistant to some biases, but equally prone to others? Then in the opinions of more intelligent people we would see less of the former biases, but perhaps more of the latter, and also more correct answers. The exact values would depend on the exact numbers in the model.
For what it’s worth (and as I’ve commented previously on that blog), in reading on heuristics & biases I’ve encountered biases whose inverse correlation with intelligence is minimal, like sunk cost, but I don’t believe I have seen any biases which correlated with increasing intelligence.
EDIT: Does the author really give questionnaires and IQ tests to large enough samples of randomly selected people?
How large is ‘large enough’? Think of political polling—how many samples do they need to extrapolate to the general population?
I don’t believe I have seen any biases which correlated with increasing intelligence.
My guess would be reversing stupidity, and searching for a difficult solution when a simple one exists. Both are related to signalling intelligence. On the other hand, I guess many intelligent people don’t self-diagnose as intelligent, so perhaps those biases would only be strong in Mensa and similar places.
But I was thinking more about one bias appearing stronger when a bias in the other direction is eliminated. For example, bias X makes people think A, and bias Y makes people think B; if a person is under the influence of both biases, the answer is randomly A or B. In such a case, eliminating bias X leads to an increase in answer B.
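A minimal sketch of that cancellation effect (the 60% bias probabilities are made up purely for illustration):

```python
# Opposing biases: X pushes toward answer A, Y pushes toward answer B.
# Under both biases the answer is randomly A or B; under neither, correct.
def answer_shares(p_x, p_y):
    both = p_x * p_y
    return {
        "A": p_x * (1 - p_y) + both / 2,
        "B": (1 - p_x) * p_y + both / 2,
        "correct": (1 - p_x) * (1 - p_y),
    }

print(answer_shares(0.6, 0.6))  # with bias X present: answer B has share 0.42
print(answer_shares(0.0, 0.6))  # bias X eliminated: B rises to 0.60
```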
How large is ‘large enough’?
That depends on what certainty of answer is required. Before convincing people “you should believe X, because this is what smart people believe”, I would like to be at least 95% certain, because this kind of argument is rather offensive towards opponents.
But I was thinking more about one bias appearing stronger when a bias in the other direction is eliminated. For example, bias X makes people think A, and bias Y makes people think B; if a person is under the influence of both biases, the answer is randomly A or B. In such a case, eliminating bias X leads to an increase in answer B.
Biases often don’t have clear ‘directions’. If you are overconfident on a claim P, that’s just as accurate as saying you were underconfident on claim ~P. Similarly for anchoring or priming—if you anchor on the random number generator while estimating the number of African nations, whether you look “over” or “under” is going to depend on whether the RNG was spitting out 1-50 or 100-200, perhaps.
I would like to be at least 95% certain
And what does that mean? If you just want to know ‘what do smart people in general believe versus normal people’, you don’t need large samples if you can get a random selection and your questions are each independent. For example, in my recent Wikipedia experiment I removed only 100 links and 3 were reverted; when I put that into a calculator for a Bernoulli distribution, I get 99% certainty that the true reversion rate is 0-7%. So to simplify considerably, if you sampled 100 smart people and 100 dumb people and they differ by 14%, is that enough certainty for you?
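For concreteness, a minimal sketch of that last comparison, assuming two independent samples of 100 each and a standard two-proportion z-test (the 40-vs-27 counts are borrowed from the example model above, a 13-point gap close to the 14% mentioned):

```python
from math import erf, sqrt

def two_proportion_z(success1, n1, success2, n2):
    """Normal-approximation test of whether two sample proportions differ."""
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - Phi(|z|))
    return z, p_two_sided

# 40/100 of the smart group vs. 27/100 of the dull group choosing answer B:
z, p = two_proportion_z(40, 100, 27, 100)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ~ 1.95, p ~ 0.051: right at the 95% line
```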
1) I think even in your example model, the answer chosen by the method would still be C, the correct conclusion, for, as the author says, “The percentage of smart and dull groups choosing each answer is compared and the largest ratio of the smart to dull percentages is the Smart Vote.” (emphasis added) As you see, it’s not the difference (a subtraction) that matters, but the ratio:
A = 50⁄70 ~ 0.71
B = 40⁄27 ~ 1.48
C = 10⁄3 ~ 3.3
Thus, C > B > A.
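A one-line check of that ratio rule, using the same hypothetical numbers (this is just the quoted rule restated as code, not Zietsman’s actual pipeline):

```python
average = {"A": 0.70, "B": 0.27, "C": 0.03}
intelligent = {"A": 0.50, "B": 0.40, "C": 0.10}

# Smart Vote per the quoted rule: the largest ratio of smart to dull shares.
ratios = {k: intelligent[k] / average[k] for k in average}
print(max(ratios, key=ratios.get))  # -> 'C' (A ~ 0.71, B ~ 1.48, C ~ 3.33)
```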
2) and 3) I don’t totally grok regression analysis yet (dropped out of college; akrasia + depression won), but he emphasises in many comments that he controls for many variables, such as income and education, to filter out non-cognitive motivations (I’m not sure this is the right term, but I hope you understand me) and get only the ‘smart’ decisions.
Regarding biases, he once answered User:gwern, who had commented on his blog, that even though bright people are prone to some biases like everyone else, in these cases they’re still a little less prone, so what counts is the ‘trend’ from the dumb to the smart. In his words:
“In cases like the sunk cost fallacy high IQ people usually make mistakes like everyone else. The point however is that on average they make them less often. Typically you see stuff like 70% of a smart group get it wrong but 90% of the dull group do. That trend, and not the % correct, is what points to the better answer.
I have a big collection of common logical mistakes people make (gleaned from Tversky & Kahneman mostly, but also many others). On all those I’ve tried so far—including the sunk cost fallacy—the brighter group does quite a bit better, even though most still get them wrong. Contrary to what you are saying here the Smart Vote does particularly well on such fallacies.”
Regarding the question posed in the EDIT, he seems to use the General Social Survey (GSS). According to Wikipedia:
“The General Social Survey (GSS) is a sociological survey used to collect data on demographic characteristics and attitudes of residents of the United States. The survey is conducted face-to-face with an in-person interview by the National Opinion Research Center at the University of Chicago, of a randomly-selected sample of adults (18+) who are not institutionalized. The survey was conducted every year from 1972 to 1994 (except in 1979, 1981, and 1992). Since 1994, it has been conducted every other year. The survey takes about 90 minutes to administer. As of 2010 28 national samples with 55,087 respondents and 5,417 variables had been collected. The data collected about this survey includes both demographic information and respondent’s opinions on matters ranging from government spending to the state of race relations to the existence and nature of God.
Because of the wide range of topics covered, and the comprehensive gathering of demographic information, survey results allow social scientists to correlate demographic factors like age, race, gender, and urban/rural upbringing with beliefs, and thereby determine whether, for example, an average middle-aged black male respondent would be more or less likely to move to a different U.S. state for economic reasons than a similarly situated white female respondent; or whether a highly educated person with a rural upbringing is more likely to believe in a transcendent God than a person with an urban upbringing and only a high-school education.”
So, yes, I think it’s a fairly comprehensive and diverse sample.
Is there an example of “politically correct” beliefs? Such as “everything is learned, heredity is a myth”. I would suspect intelligent people to be more prone to this kind of belief, because such beliefs are associated with education and require more complex explanations—both are opportunities to signal intelligence.
It seems most of his analyses are on political opinions, not on matters of fact. The one exception seems to be the existence of God, where the smart vote was agnosticism, which is not exactly “politically correct”, but would signal intelligence.
Now, some of the political positions are PC, such as support for Gay Rights and Immigration, and opposition to the Death Penalty. The position on the welfare state seems very un-PC, though (“doesn’t think is really a state responsibility but is not opposed to some welfare spending so long as the country can afford it”). The total support for abortion doesn’t seem PC at all either; at least it isn’t in Brazil.
It is important to note that these people were answering a survey, so signalling isn’t as strong a factor as it would be if they were stating their position to, say, their work colleagues.
Results for average people: 70% A, 27% B, 3% C. Results for intelligent people: 50% A, 40% B, 10% C. Possible interpretation: B is the correct answer, because that is where the difference is largest: 13 percentage points.
Wouldn’t it make more sense to use odds ratios rather than probability differences?
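For illustration, here is how the candidate measures compare on the hypothetical numbers from the example model (the difference criterion picks B, while both the simple ratio and the odds ratio pick C):

```python
average = {"A": 0.70, "B": 0.27, "C": 0.03}
intelligent = {"A": 0.50, "B": 0.40, "C": 0.10}

def odds(p):
    return p / (1 - p)

for k in average:
    diff = intelligent[k] - average[k]
    ratio = intelligent[k] / average[k]
    odds_ratio = odds(intelligent[k]) / odds(average[k])
    print(f"{k}: diff {diff:+.2f}, ratio {ratio:.2f}, odds ratio {odds_ratio:.2f}")

# A: diff -0.20, ratio 0.71, odds ratio 0.43
# B: diff +0.13, ratio 1.48, odds ratio 1.80
# C: diff +0.07, ratio 3.33, odds ratio 3.59
```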
If you sampled 100 smart people and 100 dumb people and they differ by 14%, is that enough certainty for you?
I am not good at statistics, but I guess yes. Especially if those 100 people are really randomly selected, which in the given situation they were.
Thank you for your interest in the matter.
So, yes, I think it’s a fairly comprehensive and diverse sample.
Thanks, this seems fair.
Wouldn’t it make more sense to use odds ratios rather than probability differences?
Not only does it make more sense; it is the approach adopted by Zietsman. Please check my answer above.