I agree. I was hoping somebody could make a coherent and plausible-sounding argument for their position, which seems ridiculous to me. The paper you referenced shows that if you present an extremely simple probability problem and ask for the answer as a frequency (not as a single-event probability), AND you present the data in terms of frequencies, AND you also help subjects construct concrete, visual representations of the frequencies involved by essentially spoon-feeding them the answers with leading questions, THEN most of them will get the correct answer. From this they conclude that people are good intuitive statisticians after all, and they cast doubt on the entire heuristics-and-biases literature because experimenters like Kahneman and Tversky don’t go to equally absurd lengths to present every experimental problem in the way that would have been most intuitive to our Paleolithic ancestors. The implication seems to be that rationality cannot (or should not) mean anything other than what the human brain actually does, and that the only valid questions and problems for testing rationality are those that would have made sense to our ancestors in the EEA.
I was hoping somebody could make a coherent and plausible sounding argument for their position.
I’m not sure I’m up to the challenge, but here goes anyway …
I think you are being ungenerous to the position Tooby and Cosmides mean to defend. As I read them (see especially Section 22 of their paper), they are trying to do two things. First, they want to open up the question of how exactly people reason about probabilities—i.e., what mechanisms are at work, not just what answers people give. Second, they want to argue that humans are slightly more rational than Kahneman and Tversky give them credit for being.
First point. Tooby and Cosmides do not actually commit to the position that humans use a probability calculus when they reason about probabilities. What they do argue is that Kahneman and Tversky were too quick to dismiss the possibility that such a calculus, and not just a bundle of heuristics, is at work. If humans never gave the output demanded by Bayes’ theorem, then K&T would have to be right. But T&C show that in more ecologically valid cases, (most) humans do give the output Bayes’ theorem demands. So the question is re-opened as to what brain mechanism takes frequency inputs and returns frequency outputs in accordance with Bayes’ theorem. That mechanism might or might not instantiate a rule in a probability calculus.
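To make concrete what “frequency inputs and frequency outputs in accordance with Bayes’ theorem” amounts to, here is a minimal sketch of the same base-rate problem in both framings. The numbers are the widely cited textbook figures (1-in-1000 prevalence, 5% false-positive rate, near-perfect sensitivity); they are illustrative, not necessarily the exact ones in T&C’s materials. Both routes give the same ~2% answer, but the frequency version reduces to counting.

```python
# Single-event ("percentage") framing: apply Bayes' theorem directly.
# Illustrative numbers only -- not claimed to be T&C's exact problem.
p_disease = 1 / 1000        # P(D): prevalence
p_pos_given_disease = 1.0   # P(+ | D): test assumed perfectly sensitive
p_pos_given_healthy = 0.05  # P(+ | not D): false-positive rate

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive) = {p_disease_given_pos:.3f}")  # ~0.020

# Natural-frequency framing: count cases in a reference population.
population = 1000
sick = 1                                # 1 out of 1000 has the disease
sick_and_positive = sick                # that person tests positive
healthy_and_positive = round(0.05 * (population - sick))  # ~50 false alarms

frequency_answer = sick_and_positive / (sick_and_positive + healthy_and_positive)
print(f"{sick_and_positive} out of {sick_and_positive + healthy_and_positive} "
      f"positives are actually sick (~{frequency_answer:.3f})")  # ~0.020
```

The arithmetic is identical; the dispute is over which input/output format the relevant brain mechanism is built to handle.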
Second point. If you are tempted (by K&T’s research) to say that humans are just dreadfully bad at statistical reasoning, then maybe you should hold off for a second. The question is a little bit under-specified. Do you mean “bad at statistical reasoning in general, in an abstract setting” or do you mean “bad at statistical reasoning in whatever form it might take”? If the former, then T&C are going to agree. If you frame a statistics problem with percentages, you get all kinds of errors. But if you mean the latter, then T&C are going to say that humans do pretty well on problems that have a particular form, and not surprisingly, that form is more ecologically valid.
General rule of charity: If someone appears to be defending a claim that you think is obviously ridiculous, make sure they are actually defending what you think they are defending and not something else. Alternatively (or maybe additionally), look for the strongest way to state their claim, rather than the weakest way.