At a conference back in the early 1970s, Danny [Kahneman] was introduced to a prominent philosopher named Max Black and tried to explain to the great man his work with Amos [Tversky]. “I’m not interested in the psychology of stupid people,” said Black, and walked away.
Danny and Amos didn’t think of their work as the psychology of stupid people. Their very first experiments, dramatizing the weakness of people’s statistical intuitions, had been conducted on professional statisticians. For every simple problem that fooled undergraduates, Danny and Amos could come up with a more complicated version to fool professors. At least a few professors didn’t like the idea of that. “Give people a visual illusion and they say, ‘It’s only my eyes,’ ” said the Princeton psychologist Eldar Shafir. “Give them a linguistic illusion. They’re fooled, but they say, ‘No big deal.’ Then you give them one of Amos and Danny’s examples and they say, ‘Now you’re insulting me.’ ”
In late 1970, after reading early drafts of Amos and Danny’s papers on human judgment, Edwards [former teacher of Amos] wrote to complain. In what would be the first of many agitated letters, he adopted the tone of a wise and indulgent master speaking to his naïve pupils. How could Amos and Danny possibly believe that there was anything to learn from putting silly questions to undergraduates? “I think your data collection methods are such that I don’t take seriously a single ‘experimental’ finding you present,” wrote Edwards. These students they had turned into their lab rats were “careless and inattentive. And if they are confused and inattentive, they are much less likely to behave more like competent intuitive statisticians.” For every supposed limitation of the human mind Danny and Amos had uncovered, Edwards had an explanation. The gambler’s fallacy, for instance. If people thought that a coin, after landing on heads five times in a row, was more likely, on the sixth toss, to land on tails, it wasn’t because they misunderstood randomness. It was because “people get bored doing the same thing all the time.”
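Not from the book, just a minimal simulation sketch (the function name and parameters are my own) to make the coin example concrete: for a fair coin the tosses are independent, so the toss that follows five heads in a row still lands tails about half the time, boredom or not.

```python
import random

def tails_rate_after_heads_streak(n_flips=1_000_000, streak=5):
    """Flip a fair coin n_flips times; among tosses that immediately follow
    `streak` consecutive heads, return the fraction that come up tails."""
    tails_after, total_after = 0, 0
    heads_run = 0
    for _ in range(n_flips):
        heads = random.random() < 0.5
        if heads_run >= streak:        # this toss follows a run of `streak` heads
            total_after += 1
            tails_after += (not heads)
        heads_run = heads_run + 1 if heads else 0
    return tails_after / total_after

print(tails_rate_after_heads_streak())  # ~0.5: the streak tells you nothing
```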
An Oxford philosopher named L. Jonathan Cohen raised a small philosophy-sized ruckus with a series of attacks in books and journals. He found alien the idea that you might learn something about the human mind by putting questions to people. He argued that because man had created the concept of rationality, he must, by definition, be rational. “Rational” was whatever most people did. Or, as Danny put it in a letter that he reluctantly sent in response to one of Cohen’s articles, “Any error that attracts a sufficient number of votes is not an error at all.”
He argued that because man had created the concept of rationality, he must, by definition, be rational.
Oh my.
Or, as Danny put it in a letter that he reluctantly sent in response to one of Cohen’s articles, “Any error that attracts a sufficient number of votes is not an error at all.”
Wondering how many computation cycles humanity has wasted since the beginning of time debating words will give me nightmares. Have we, in four thousand years of history, accumulated a month of creative, uninterrupted thought about truth that wasn’t about definitions?
Heh. Humanity did a lot of useful work by observing things, and in recent centuries by applying math. Also, humans are traditionally good at making tools, because toolmaking requires near-mode thinking. So we do have a few strengths. It’s just that understanding the difference between a map and the territory, in the absence of constant experimental feedback, is not one of them.
I have met a few smart people who had a similar reaction to the whole “heuristics and biases” topic. They react as if the idea that the human brain could be somehow imperfect were a personal offense aimed at them, and immediately start composing verbal arguments about how biases are not “really” mistakes.
For example, people who are otherwise skeptical about evolution when it interferes with their religious beliefs suddenly say things like “but an irrational brain would be an evolutionary disadvantage, so it could never evolve!” (On second thought, I guess the true reason for these specific people’s discomfort could be that the idea of cognitive biases is not really compatible with the idea of an omniscient and omnibenevolent intelligent designer. I mean, intentionally designing an intelligent mind that systematically thinks incorrectly and cannot help itself sounds quite evil.)