For Samuels and Stich: everything. Well, actually, a lot of their papers repeat the same points, perhaps as a result of publish-or-perish, least publishable units, and all that. If you want to read just one article to save time, go with “Ending the Rationality Wars.”
For Pinker, I was mainly thinking of chapter 5 of his book How the Mind Works (an excellent book; I’m somewhat surprised so few people around LW seem to have read it). His discussion is based on lots of other people’s work. Cosmides and Tooby show up a lot, and, double-checking my copy, so does Gigerenzer, though Pinker says nothing about Gigerenzer’s “group approach” or anything like that. (Warning: Pinker shows frequentist sympathies regarding probability.)
Samuels et al.’s “Ending the Rationality Wars” is a good paper, and I generally agree with it. Though Samuels et al. mostly show that the dispute between the two groups has been exaggerated, they do acknowledge that Gigerenzer’s frequentism leads him to different normative standards for rationality than what Stein (1996) called the “Standard Picture” in cognitive science. LessWrong follows the Standard Picture. Moreover, some of the criticisms of Gigerenzer & company given here still stand.
I skimmed chapter 5 of How the Mind Works but didn’t immediately see the claims you might be referring to — ones that disagree with the Standard Picture and Less Wrong.
I don’t have access to Stein, so this may be a different issue entirely. But:
What I had in mind from Pinker was the sections “ecological rationality” (a term from Tooby and Cosmides that means “subject-specific intelligence”) and “a trivium.”
One key point is that general-purpose rules of reasoning tend to be designed for situations where we know very little. Following them mindlessly is often a stupid thing to do in situations where we know more. Unsurprisingly, specialized mental modules often beat general-purpose ones on the specific tasks they’re adapted to. That’s reason not to make too much of the fact that humans fail to follow the general-purpose rules.
And in fact, some “mistakes” are only mistakes in particular circumstances. Pinker gives the example of the “gambler’s fallacy,” which is only a fallacy when the probabilities of the events are independent, which outside of a casino they very often aren’t.
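The independence point can be checked with a quick simulation (a toy sketch of my own, not anything from Pinker): compare a fair coin, where outcomes are independent, with draws without replacement from a small deck, where they aren’t. Conditioning on a streak of one outcome changes the probability of the next outcome only in the dependent case — so “expecting red after a run of black” is a fallacy at the roulette table but not at the card table.

```python
import random

def next_after_streak(draw_fn, streak=3, trials=100_000):
    """Estimate P(next outcome = 'R') given the previous `streak`
    outcomes were all 'B', under the process `draw_fn`."""
    hits = total = 0
    for _ in range(trials):
        seq = draw_fn(streak + 1)
        if all(x == "B" for x in seq[:streak]):
            total += 1
            hits += seq[streak] == "R"
    return hits / total

# Independent process: a fair coin, relabeled R/B.
coin = lambda n: [random.choice("RB") for _ in range(n)]

# Dependent process: draws without replacement from 5 red + 5 black cards.
def deck(n):
    cards = list("RRRRRBBBBB")
    random.shuffle(cards)
    return cards[:n]

print(round(next_after_streak(coin), 2))  # ≈ 0.50: the streak tells you nothing
print(round(next_after_streak(deck), 2))  # ≈ 0.71 (= 5/7): the streak raises P(red)
```

After three black draws from the ten-card deck, five of the seven remaining cards are red, so the “gambler’s” expectation is exactly right there.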
Pinker seems to be missing the same major point that Gigerenzer et al. continually miss, a point made by those in the heuristics and biases tradition from the beginning (e.g. Baron 1985): the distinction between normative, descriptive, and prescriptive rationality. In a paper I’m developing, I explain:
Our view of normative rationality does not imply, however, that humans ought to explicitly use the laws of rational choice theory to make every decision. Neither humans nor machines have the knowledge and resources to do so (Van Rooij 2008; Wang 2011). Thus, in order to approximate normative rationality as best we can, we often (rationally) engage in a “bounded rationality” (Simon 1957) or “ecological rationality” (Gigerenzer and Todd 2012) that employs simple heuristics to imperfectly achieve our goals with the limited knowledge and resources at our disposal (Vul 2010; Vul et al. 2009; Kahneman and Frederick 2005). Thus, the best prescription for human reasoning is not necessarily to always use the normative model to govern one’s thinking (Stanovich 1999; Baron 1985).
Or, here is Baron (2008):

In short, normative models tell us how to evaluate judgments and decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model. If prescriptive recommendations derived in this way are successful, the study of thinking can help people to become better thinkers.
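The sampling idea cited above (Vul et al.’s “one and done” line of work) can be made concrete with a toy model — my own illustration, not from the paper. A normative agent computes the full posterior over two possible coin biases and picks the more probable hypothesis; a bounded agent draws a single sample from that posterior and acts on it. The one-sample agent recovers most of the normative accuracy at a fraction of the deliberation.

```python
import random

BIASES = (0.7, 0.3)  # two hypotheses about a coin's P(heads), equal priors

def posterior_heads(k, n):
    """Posterior probability that the true bias is 0.7, given k heads in n flips."""
    like = [b**k * (1 - b) ** (n - k) for b in BIASES]
    return like[0] / (like[0] + like[1])

def run(trials=50_000, n=5):
    norm_correct = samp_correct = 0
    for _ in range(trials):
        true_b = random.choice(BIASES)
        k = sum(random.random() < true_b for _ in range(n))
        p07 = posterior_heads(k, n)
        # Normative agent: use the full posterior, pick the likelier hypothesis.
        norm_guess = 0.7 if p07 >= 0.5 else 0.3
        # Bounded agent: draw ONE sample from the posterior and act on it.
        samp_guess = 0.7 if random.random() < p07 else 0.3
        norm_correct += norm_guess == true_b
        samp_correct += samp_guess == true_b
    return norm_correct / trials, samp_correct / trials
```

With five flips per trial, the normative agent is right about 84% of the time and the one-sample agent about 76% — most of the benefit of full Bayesian reasoning, which is the sense in which simple heuristics can be a rational response to limited resources.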
What does mainstream academic prescriptive rationality look like? I get the sense that’s where Eliezer invented a lot of his own stuff, because “mainstream cogsci” hasn’t done much prescriptive work yet.
Examples: Larrick (2004); Lovallo & Sibony (2010).
This is helpful. Will look at Baron later.