There are four elephant-in-the-room issues surrounding rationality:
1 Rationality is more than one thing;
2 Biases are almost impossible to overcome;
3 Confirmation bias is adaptive to group discussion;
4 If biases are so harmful, why don’t they get selected out?
If biases are so harmful, why don’t they get selected out?
We have good reason to believe that many biases are the result of cognitive shortcuts designed to speed up decision-making, but not in all cases. Mercier and Sperber’s Argumentative Theory of Reasoning suggests that confirmation bias is an adaptation to arguing things out in groups: that’s why people adopt a single point of view and stick to it in the face of almost all opposition. You don’t get good-quality discussion from a bunch of people saying “There Are Arguments on Both Sides”.
“Mercier and Sperber argue that, when you look at research that studies people in the appropriate settings, we turn out to be in fact quite good at reasoning when we are in the process of arguing; specifically, we demonstrate skill at producing arguments and at evaluating others’ arguments. M&S also plead for the “rehabilitation” of confirmation bias as playing an adaptive, useful role in the production of arguments in favor of an intuitively preferred view.”
Societies have systems and structures in place for ameliorating and leveraging confirmation bias. For instance, replication and cross-checking in science ameliorate the tendency of research groups to succumb to bias. Adversarial legal processes and party politics leverage the tendency, in order to get good arguments made for both sides of a question. Values such as speaking one’s mind (as opposed to agreeing with leaders) and offering and accepting criticism also support rationality.
Now, teaching rationality, in the sense of learning to personally overcome bias, has a problem in that it may not be fully achievable, and a further problem in that it may not be a good idea. Teaching someone to overcome confirmation bias, to see two or more sides of a story, is, in a sense, teaching them to internalise the process of argument, to be a solo rationalist. And while society perhaps needs some such people, it perhaps doesn’t need many. Forms of solo rationality training have existed for a long time, e.g. philosophy, but they do not suit most people’s preferences, and not many people can succeed at them, since they are cognitively difficult.
If you plug solo rationalists into systems designed for the standard human, you are likely to get an impedance mismatch, not improved rationality. If you wanted to increase overall rationality by increasing average rationality, assuming that is feasible in the first place, you would have to redesign systems. But you could probably increase overall rationality by improving systems anyway... we live in a world where medicine, of all things, isn’t routinely based on good-quality evidence.
Some expansion of point 4: If biases are so harmful, why don’t they get selected out?
“During the last 25 years, researchers studying human reasoning and judgment in what has become known as the “heuristics and biases” tradition have produced an impressive body of experimental work which many have seen as having “bleak implications” for the rationality of ordinary people (Nisbett and Borgida 1975). According to one proponent of this view, when we reason about probability we fall victim to “inevitable illusions” (Piattelli-Palmarini 1994). Other proponents maintain that the human mind is prone to “systematic deviations from rationality” (Bazerman & Neale 1986) and is “not built to work by the rules of probability” (Gould 1992). It has even been suggested that human beings are “a species that is uniformly probability-blind” (Piattelli-Palmarini 1994). This provocative and pessimistic interpretation of the experimental findings has been challenged from many different directions over the years. One of the most recent and energetic of these challenges has come from the newly emerging field of evolutionary psychology, where it has been argued that it’s singularly implausible to claim that our species would have evolved with no “instinct for probability” and, hence, be “blind to chance” (Pinker 1997, 351). Though evolutionary psychologists concede that it is possible to design experiments that “trick our probability calculators,” they go on to claim that “when people are given information in a format that meshes with the way they naturally think about probability,” (Pinker 1997, 347, 351) the inevitable illusions turn out to be, to use Gerd Gigerenzer’s memorable term, “evitable” (Gigerenzer 1998). Indeed in many cases, evolutionary psychologists claim that the illusions simply “disappear” (Gigerenzer 1991).” http://ruccs.rutgers.edu/ArchiveFolder/Research%20Group/Publications/Wars/wars.html