1) More generally, what if more intelligent people are more resistant to some biases, but equally prone to others? Then in the opinions of more intelligent people we would see less of the former biases, but perhaps more of the latter; and also more of the correct answers. The exact values would depend on the exact numbers in the model.
For what it’s worth (and as I’ve commented previously on that blog), in reading on heuristics & biases, I’ve encountered biases whose inverse correlation with intelligence is minimal, such as sunk cost, but I don’t believe I have seen any biases which correlated with increasing intelligence.
EDIT: Does the author really give questionnaires and IQ tests to large enough samples of randomly selected people?
How large is ‘large enough’? Think of political polling—how many samples do they need to extrapolate to the general population?
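As a rough illustration (standard polling arithmetic, not anything from the original post): the margin of error of a simple random sample shrinks with the square root of its size, which is why national polls get by with roughly a thousand respondents.

```python
from math import ceil

def poll_sample_size(margin, z=1.96, p=0.5):
    """Sample size needed for a given margin of error on a proportion.

    Uses the normal approximation with worst-case variance at p=0.5;
    z=1.96 corresponds to 95% confidence.
    """
    return ceil(z**2 * p * (1 - p) / margin**2)

print(poll_sample_size(0.03))  # ~1068 respondents for a +/-3% margin
```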
I don’t believe I have seen any biases which correlated with increasing intelligence.
My guess would be reversing stupidity, and searching for a difficult solution when a simple one exists. Both are related to signalling intelligence. On the other hand, I guess many intelligent people don’t self-diagnose as intelligent, so perhaps those biases would only be strong in Mensa and similar places.
But I was thinking more about one bias appearing stronger when a bias in the opposite direction is eliminated. For example, bias X makes people think A and bias Y makes people think B; if a person is under the influence of both biases, the answer is randomly A or B. In that case, eliminating bias X leads to an increase in answer B.
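A toy simulation of that mechanism (the rates are made up, purely to illustrate):

```python
import random

random.seed(0)

def answer(has_bias_x, has_bias_y):
    """One respondent: bias X pushes toward A, bias Y pushes toward B.
    Under both, the pulls cancel and the answer is a coin flip."""
    if has_bias_x and has_bias_y:
        return random.choice("AB")
    if has_bias_x:
        return "A"
    if has_bias_y:
        return "B"
    return "C"  # unbiased respondents give the correct answer

def share_of_B(bias_x_rate, bias_y_rate, n=100_000):
    count = 0
    for _ in range(n):
        x = random.random() < bias_x_rate
        y = random.random() < bias_y_rate
        count += answer(x, y) == "B"
    return count / n

# Eliminating bias X raises the share of answer B, even though
# nothing about bias Y itself changed.
print(share_of_B(bias_x_rate=0.8, bias_y_rate=0.8))  # ~0.48
print(share_of_B(bias_x_rate=0.0, bias_y_rate=0.8))  # ~0.80
```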
How large is ‘large enough’?
That depends on what certainty of answer is required. Before convincing people “you should believe X, because this is what smart people believe” I would like to be at least 95% certain, because this kind of argument is rather offensive towards opponents.
But I was thinking more about one bias appearing stronger when a bias in the opposite direction is eliminated. For example, bias X makes people think A and bias Y makes people think B; if a person is under the influence of both biases, the answer is randomly A or B. In that case, eliminating bias X leads to an increase in answer B.
Biases often don’t have clear ‘directions’. If you are overconfident on a claim P, that’s just as accurate as saying you were underconfident on the claim ~P. Similarly for anchoring or priming—if you anchor on the random number generator while estimating the number of African nations, whether you look “over” or “under” is going to depend on whether the RNG was spitting out 1-50 or 100-200, perhaps.
I would like to be at least 95% certain
And what does that mean? If you just want to know ‘what do smart people in general believe versus normal people’, you don’t need large samples if you can get a random selection and your questions are each independent. For example, in my recent Wikipedia experiment I removed only 100 links and 3 were reverted; when I put that into a calculator for a Bernoulli distribution, I get 99% certainty that the true reversion rate is 0-7%. So to simplify considerably, if you sampled 100 smart people and 100 dumb people and they differ by 14%, is that enough certainty for you?
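(For anyone who wants to redo that arithmetic, here is a minimal sketch using an exact Clopper-Pearson interval; different calculators use different interval methods, so the bounds will not match exactly.)

```python
from scipy.stats import beta

def clopper_pearson(k, n, confidence=0.99):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    alpha = 1 - confidence
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# 3 reversions out of 100 removed links:
print(clopper_pearson(3, 100))  # roughly (0.003, 0.11) at 99% confidence
```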
I am not good at statistics, but I guess yes. Especially if those 100 people are really randomly selected, which in the given situation they were.
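As a quick check of that 14% example (my own back-of-the-envelope, using a standard two-proportion z-test and assuming base rates near 50%, where the test is weakest):

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_pvalue(k1, n1, k2, n2):
    """Two-sided z-test for the difference between two proportions
    (normal approximation with a pooled variance estimate)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * norm.sf(abs(z))

# 100 smart people vs. 100 dumb people, answering 64% vs. 50% on some question:
print(two_proportion_pvalue(64, 100, 50, 100))  # ~0.046
```

So with 100 people per group, a 14-point gap is just barely significant at the 95% level.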