I don’t recall any discussion on LW—and couldn’t find any with a quick search—about the “Great Rationality Debate”, which Stanovich summarizes as:
An important research tradition in the cognitive psychology of reasoning—called the heuristics
and biases approach—has firmly established that people’s responses often deviate from the
performance considered normative on many reasoning tasks. For example, people assess
probabilities incorrectly, they display confirmation bias, they test hypotheses inefficiently, they
violate the axioms of utility theory, they do not properly calibrate degrees of belief, they overproject
their own opinions onto others, they display illogical framing effects, they uneconomically honor
sunk costs, they allow prior knowledge to become implicated in deductive reasoning, and they
display numerous other information processing biases (for summaries of the large literature, see
Baron, 1998, 2000; Dawes, 1998; Evans, 1989; Evans & Over, 1996; Kahneman & Tversky, 1972,
1984, 2000; Kahneman, Slovic, & Tversky, 1982; Nickerson, 1998; Shafir & Tversky, 1995;
Stanovich, 1999; Tversky, 1996).
It has been common for these empirical demonstrations of a gap between descriptive and
normative models of reasoning and decision making to be taken as indications that systematic
irrationalities characterize human cognition. However, over the last decade, an alternative
interpretation of these findings has been championed by various evolutionary psychologists,
adaptationist modelers, and ecological theorists (Anderson, 1990, 1991; Chater & Oaksford, 2000;
Cosmides & Tooby, 1992, 1994b, 1996; Gigerenzer, 1996a; Oaksford & Chater, 1998, 2001; Rode,
Cosmides, Hell, & Tooby, 1999; Todd & Gigerenzer, 2000). They have reinterpreted the modal
response in most of the classic heuristics and biases experiments as indicating an optimal
information processing adaptation on the part of the subjects. It is argued by these investigators that
the research in the heuristics and biases tradition has not demonstrated human irrationality at all and
that a Panglossian position (see Stanovich & West, 2000) which assumes perfect human rationality
is the proper default position to take.
Stanovich, K. E., & West, R. F. (2003). Evolutionary versus instrumental goals: How evolutionary psychology misconceives human rationality. In D. E. Over (Ed.), Evolution and the psychology of thinking: The debate. Psychology Press. [Series on Current Issues in Thinking and Reasoning]
The lack of discussion seems like a curious gap, given that both schools of thought enjoy strong support: the one that Cosmides, Tooby, and colleagues represent on the one hand, and the one that Kahneman, Tversky, and colleagues represent on the other. The two camps are in radical opposition on the nature of human rationality and on purported deviations from it, both of which are central subjects of this site.
I don’t expect to find much support here for the Tooby/Cosmides position on the issue, but I’m surprised that there doesn’t seem to have been any discussion of it. Maybe I’ve missed discussions or posts, though.
I don’t understand the basis for the Cosmides and Tooby claim. In their first study, Cosmides and Tooby (1996) solved the difficult part of a Bayesian problem in advance, so that the solution could be found by a “cut and paste” approach. The second study was about the same, with some unnecessary percentages deleted (they were not needed for the cut-and-paste solution, yet the authors were surprised when performance improved). Study 3 was essentially a repeat of Study 2. Study 4 had respondents literally fill in the blanks of a diagram based on the numbers written in the question; 92% of the students answered that one correctly. Studies 5 and 6 reverted to percentages, and the students made many errors.
Instead of showing innate, perfect reasoning, the study tells me that students at Yale have trouble with Bayesian reasoning when the question is framed in terms of percentages. The easy versions do not demonstrate the kind of complex reasoning needed to see a problem and frame it yourself, without somebody framing it for you. Perhaps Cosmides and Tooby are correct that there is some evidence people use a “calculus of probability”, but their study showed that people cannot frame the problems without overwhelming amounts of help from somebody who already knows the correct answer.
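To make the framing issue concrete, here is a small sketch comparing the two presentations of a Bayesian problem. The numbers are the commonly cited medical-diagnosis example (prevalence 1 in 1,000, a 5% false-positive rate); the 100% sensitivity is my simplifying assumption, not a figure from the study:

```python
# Illustrative numbers from the classic medical-diagnosis problem
# (prevalence 1/1000, 5% false-positive rate); sensitivity is
# assumed to be 100% for simplicity.
prevalence = 0.001   # P(disease)
sensitivity = 1.0    # P(positive | disease), assumed
false_pos = 0.05     # P(positive | no disease)

# Percentage / single-event framing: apply Bayes' theorem directly.
p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

# Natural-frequency framing: imagine 1,000 people and count cases.
n = 1000
sick = 1                                      # 1 in 1000 is sick
false_alarms = round(false_pos * (n - sick))  # about 50 healthy positives
freq_answer = sick / (sick + false_alarms)

print(round(p_disease_given_positive, 4))  # 0.0196
print(round(freq_answer, 4))               # 0.0196
```

Both framings yield the same roughly 2% posterior; the frequency version just arrives with most of the setup already done, which is the point above about the difficult part of the problem being solved for the subjects.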
Reference
Cosmides, L., & Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58, 1–73. DOI: 10.1016/0010-0277(95)00664-8
I agree. I was hoping somebody could make a coherent and plausible-sounding argument for their position, which seems ridiculous to me. The paper you referenced shows that if you present an extremely simple problem of probability and ask for the answer in terms of a frequency (and not as a single event), AND you present the data in terms of frequencies, AND you also help subjects to construct concrete, visual representations of the frequencies involved by essentially spoon-feeding them the answers with leading questions, THEN most of them will get the correct answer. From this they conclude that people are good intuitive statisticians after all, and they cast doubt on the entire heuristics and biases literature because experimenters like Kahneman and Tversky don’t go to equally absurd lengths to present every experimental problem in ways that would be most intuitive to our Paleolithic ancestors. The implication seems to be that rationality cannot (or should not) mean anything other than what the human brain actually does, and the only valid questions and problems for testing rationality are those that would make sense to our ancestors in the EEA.
I was hoping somebody could make a coherent and plausible-sounding argument for their position.
I’m not sure I’m up to the challenge, but here goes anyway …
I think you are being ungenerous to the position Tooby and Cosmides mean to defend. As I read them (see especially Section 22 of their paper), they are trying to do two things. First, they want to open up the question of how exactly people reason about probabilities—i.e., what mechanisms are at work, not just what answers people give. Second, they want to argue that humans are slightly more rational than Kahneman and Tversky give them credit for being.
First point. Tooby and Cosmides do not actually commit to the position that humans use a probability calculus in their probabilistic reasoning. What they do argue is that Kahneman and Tversky were too quick to dismiss the possibility that humans do use a probability calculus—not just heuristics—in their probabilistic reasoning. If humans never gave the output demanded by Bayes’ theorem, then K&T would have to be right. But T&C show that in more ecologically valid cases, (most) humans do give the output demanded by Bayes. So, the question is re-opened as to what brain mechanism takes frequency inputs and gives frequency outputs in accordance with Bayes’ theorem. That mechanism might or might not instantiate a rule in a calculus.
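To illustrate why matching Bayes’ theorem on frequency problems leaves the mechanism question open, here is a hypothetical sketch of a procedure that takes frequency inputs and gives frequency outputs by pure tallying, with no probability calculus anywhere. The population counts (1 true positive and 50 false positives out of 1,000 people) reuse the illustrative medical-diagnosis numbers; the function name is my invention:

```python
from fractions import Fraction

def posterior_by_counting(cases):
    """Given (has_disease, tested_positive) observations, return the
    fraction of positive testers who are actually sick, by tallying."""
    positives = [has_disease for has_disease, positive in cases if positive]
    return Fraction(sum(positives), len(positives))

# Illustrative population: 1 sick person who tests positive,
# 50 healthy false positives, 949 healthy true negatives.
population = [(True, True)] + [(False, True)] * 50 + [(False, False)] * 949
print(posterior_by_counting(population))  # 1/51, the Bayesian posterior
```

Nothing here multiplies or normalizes probabilities, yet the output agrees with Bayes’ theorem, which is why a correct frequency answer by itself cannot tell you whether the underlying mechanism instantiates a rule in a calculus or something simpler.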
Second point. If you are tempted (by K&T’s research) to say that humans are just dreadfully bad at statistical reasoning, then maybe you should hold off for a second. The question is a little bit under-specified. Do you mean “bad at statistical reasoning in general, in an abstract setting” or do you mean “bad at statistical reasoning in whatever form it might take”? If the former, then T&C are going to agree. If you frame a statistics problem with percentages, you get all kinds of errors. But if you mean the latter, then T&C are going to say that humans do pretty well on problems that have a particular form, and not surprisingly, that form is more ecologically valid.
General rule of charity: If someone appears to be defending a claim that you think is obviously ridiculous, make sure they are actually defending what you think they are defending and not something else. Alternatively (or maybe additionally), look for the strongest way to state their claim, rather than the weakest way.
Typically, the “optimal thinking” argument gets brought up here in the context of evolutionary psychology. Loss aversion makes sound reproductive sense when you’re a hunter-gatherer, and performing a careful Bayesian update doesn’t help all that much. But times have changed, and humans have not changed nearly as much.