understanding where your biases come from, and putting that knowledge to good use, is of more value than rejecting all bias as evil.
Putting that knowledge to use is great, but aside from that, shouldn’t we still reject bias as “evil” on the grounds that biases are systematic errors in reasoning?
Sure, the brain circuitry that produces a bias in a certain context can be adaptive or promote correctness in another. I don’t think anyone doubts here that every bias has an evolutionary reason and history behind it. Evolutionary trade-offs and adaptive advantages can both lead to the rise of things we call biases now. What worked in an ancestral environment might not work now, etc.
“Rejecting” a bias as evil ideally requires that the bias is a compactly definable, reliable pattern over many outcomes. Maybe we’ll acquire some extra knowledge about mechanisms and brain circuitry behind some biases, and we’ll be able to collapse several biases into one, or split them, or just generally refine our understanding, thus aiding debiasing. But I think that a bias itself, as an observed pattern of errors, is bad and stays bad.
Please take note of the wording: “reject all bias as evil”.
That is, lumping all demonstrated instances of bias into a general category of “ugh, I should avoid doing this” is likely to keep us from looking into the interesting adaptive properties of specific biases.
When confronted with a specific bias, the useful thing to do is recognize that it introduces error in particular contexts but may remain adaptive in other contexts. We will then strive to adopt prescriptive approaches, selected according to context, which help correct for observed bias and bring our cognition into line with the desired normative frameworks—which themselves differ from context to context.
I meant that it’s the specific underlying mechanisms that can produce a bias or promote correctness; a bias is just a surface-level fact about what errors people tend to make. Also, many biases are specific to a certain mental task and don’t transfer to other contexts. Nor is it guaranteed that a current “bias” concept won’t be superseded by additional knowledge. Therefore, the ideal basis for debiasing is most likely a detailed understanding of psychology and neurology, which is a point you expressed, and I agree.
I think this disagreement comes down to the definition of “bias”, which Wikipedia defines as “a tendency or preference towards a particular perspective, ideology or result, when the tendency interferes with the ability to be impartial, unprejudiced, or objective.” If a bias helps you make fewer errors, I would argue it’s not a bias.
Maybe it is clearer if we speak of behaviors rather than biases. A given behavior (e.g. tendency to perceive what you were expecting to perceive) may make you more biased in certain contexts, and more rational in others. It might be advantageous to keep this behavior if it helps you more than it hurts you, but to the extent that you can identify the situations where the behavior causes errors, you should try to correct it.
Great audio clip, BTW.
I understand and accept the premise that biases can be adaptive, and therefore beneficial to success and not evil.
You bring up the idea of normative frameworks, which I like, but don’t expound upon the idea. Which biases, in which frameworks, lead to success? Is this something we can speculate about?
For example, what biases and what framework would be successful for a stock market trader?
I think Kutta is right to suggest that we use the term bias for “a pattern of errors”. What is confusing is that we also tend to refer by that term to the underlying process which produces the pattern, and that such a process is beneficial or detrimental depending on what we use it for.
If it is indeed the case that the confirmation bias shown in cognitive studies is produced by the same processes that our perception uses, then confirmation bias could be a good thing for a stock trader, if it lets them identify patterns in market data which are actually there.
The audio sample above would be a good analogy. First you look at the market data and see just a jumble of numbers, up and down, up and down. Then you go, “Hey, doesn’t this look like what I’ve already seen in insider trading cases?” (Or whatever would make a plausible example; I don’t know much about stock trading.) And now the market data seems to make a lot of sense.
In this hypothesis (and keep in mind it is only a hypothesis) confirmation bias helps you make sense of the data. Being aware of confirmation bias as a source of error reminds you to double-check your initial idea, using more reliable tools (say, mathematical ones) if you have them.
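One concrete form of that mathematical double-check is a permutation test: if an apparent pattern survives a shuffle of the data, it was probably noise all along. Here is a toy sketch with purely synthetic data standing in for market moves (the "pattern" tested, whether yesterday's move predicts today's sign, is invented for illustration):

```python
import random

random.seed(0)

# Toy double-check for a spotted "pattern": does one day's move tend to
# share a sign with the next day's? Measure the rate of same-sign
# consecutive pairs, then ask how extreme that looks once the order of
# the moves is shuffled away.

def same_sign_rate(moves):
    pairs = list(zip(moves, moves[1:]))
    return sum((a > 0) == (b > 0) for a, b in pairs) / len(pairs)

moves = [random.gauss(0, 1) for _ in range(200)]  # pure noise stands in for market data
observed = same_sign_rate(moves)

# Permutation null: shuffle the series many times and count how often a
# same-sign rate at least as large arises by chance alone.
trials = 1000
count = sum(
    same_sign_rate(random.sample(moves, len(moves))) >= observed
    for _ in range(trials)
)

p_value = count / trials
print(observed, p_value)  # a large p-value says the "pattern" is likely noise
```

The point of the sketch is only the workflow: the initial hunch (confirmation-bias-assisted pattern spotting) generates the hypothesis, and the shuffle test is the more reliable tool that vets it.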
The “Heuristics and Biases” party line is to use “heuristic” to refer to the underlying mechanism. For example, “the representativeness heuristic can lead to neglect of base-rate.”
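Base-rate neglect is easy to make concrete with a toy Bayes calculation (all numbers here are invented for illustration). Suppose some evidence is highly "representative" of a rare hypothesis; judging by representativeness alone suggests a probability near 0.95, while Bayes' rule, which also weighs the base rate, gives a far smaller answer:

```python
# Toy illustration of base-rate neglect (all numbers invented).
# The evidence strongly "fits" the hypothesis: P(evidence | H) = 0.95.
# Representativeness alone suggests an answer near 0.95; Bayes' rule,
# which also weighs the 1% base rate, says otherwise.

def posterior(prior, p_ev_given_h, p_ev_given_not_h):
    """P(H | evidence) via Bayes' rule."""
    joint_h = prior * p_ev_given_h
    joint_not_h = (1 - prior) * p_ev_given_not_h
    return joint_h / (joint_h + joint_not_h)

p = posterior(prior=0.01, p_ev_given_h=0.95, p_ev_given_not_h=0.10)
print(round(p, 3))  # 0.088 -- nowhere near the 0.95 representativeness suggests
```

The heuristic (judge by fit) is the mechanism; the reliable error it produces in low-base-rate settings is the bias.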
I get the “normative” jargon from Baron’s Thinking and Deciding. In studies of decision making you can roughly break the work down into three categories: descriptive, normative, and prescriptive.
The first consists of studying how people actually think; that’s typically the work of the Kahneman and Tverskys of the field.
The second consists of identifying what rules must hold if decisions are to satisfy some desirable properties; for instance, Expected Utility is normative in evaluating known alternatives, and Probability Theory is normative in assessing probabilities (and even turns out to be normative in comparing scientific hypotheses).
The third is about what we can do to bring our thinking in line with some normative framework. “You should compute the Expected Utility of all your options and pick the highest valued one” may not be an appropriate prescription, for instance if you are in a situation where you have many options but too little time to consciously work out utilities; or in a situation where you have too few options and should instead work on figuring out additional options.
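In the simplest case, fully specified options with known outcome probabilities, the normative rule "pick the option with the highest Expected Utility" can be sketched directly (the options and payoffs below are invented for illustration):

```python
# Minimal sketch of the normative rule "maximize Expected Utility" for
# fully specified options. Options and numbers are invented.
# Each option is a list of (probability, utility) outcomes.

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

options = {
    "safe bet":  [(1.0, 50)],
    "risky bet": [(0.5, 120), (0.5, 0)],
    "long shot": [(0.1, 400), (0.9, 0)],
}

best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # "risky bet": EU of 60 beats 50 and 40
```

The prescriptive question is precisely when running a calculation like this is worth it: with many options and little time, or with too few options on the table, the normative rule is not the right prescription.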
This might be worth writing up as a top-level post for future reference and further discussion.