From the subreddit: Humans Are Hardwired To Dismiss Facts That Don’t Fit Their Worldview. Once you get through the preliminary Trump supporter and anti-vaxxer denunciations, it turns out to be an attempt at an evo psych explanation of confirmation bias:
Our ancestors evolved in small groups, where cooperation and persuasion had at least as much to do with reproductive success as holding accurate factual beliefs about the world. Assimilation into one’s tribe required assimilation into the group’s ideological belief system. An instinctive bias in favor of one’s “in-group” and its worldview is deeply ingrained in human psychology.
I think the article as a whole makes good points, but I’m increasingly uncertain that confirmation bias can be separated from normal reasoning.
Suppose that one of my friends says she saw a coyote walk by her house in Berkeley. I know there are coyotes in the hills outside Berkeley, so I am not too surprised; I believe her.
Now suppose that same friend says she saw a polar bear walk by her house. I assume she is mistaken, lying, or hallucinating.
Is this confirmation bias? It sure sounds like it. When someone says something that confirms my preexisting beliefs (eg ‘coyotes live in this area, but not polar bears’), I believe it. If that same person provides the same evidence for something that challenges my preexisting beliefs, I reject it. What am I doing differently from an anti-vaxxer who rejects any information that challenges her preexisting beliefs (eg that vaccines cause autism)?
When new evidence challenges our established priors (eg a friend reports a polar bear, but I have a strong prior that there are no polar bears around), we ought to heavily discount the evidence and slightly shift our prior. So I should end up believing that my friend is probably wrong, but I should also be slightly less confident in my assertion that there are no polar bears loose in Berkeley today. This seems sufficient to explain confirmation bias, ie a tendency to stick to what we already believe and reject evidence against it.
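To make that concrete, here is a toy version of the update in Python; a minimal sketch with invented numbers (the prior on loose polar bears and my friend’s error rates are assumptions for illustration, nothing measured):

```python
# Toy Bayesian update for the polar-bear report.
# All numbers are invented for illustration.

prior_bear = 1e-6           # prior probability a polar bear is loose in Berkeley today
p_report_if_bear = 0.9      # friend reports a bear if one is really there
p_report_if_no_bear = 1e-3  # friend reports a bear anyway (mistaken, joking, hallucinating)

# Bayes' rule: P(bear | report)
posterior_bear = (p_report_if_bear * prior_bear) / (
    p_report_if_bear * prior_bear + p_report_if_no_bear * (1 - prior_bear)
)

print(f"posterior = {posterior_bear:.6f}")  # about 0.0009
```

Even with generous numbers for my friend’s honesty, the posterior lands around a tenth of a percent: I still conclude she is probably wrong, but my confidence that no polar bear is loose today has dropped by a couple of orders of magnitude.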
The anti-vaxxer is still doing something wrong; she somehow managed to get a very strong prior on a false statement, and isn’t weighing the new evidence heavily enough. But I think it’s important to note that she’s attempting to carry out normal reasoning, and failing, rather than carrying out some special kind of reasoning called “confirmation bias”.
There are some important refinements to make to this model – maybe there’s a special “emotional reasoning” that locks down priors more tightly, and maybe people naturally overweight priors because that was adaptive in the ancestral environment. Maybe after you add these refinements, you end up at exactly the traditional model of confirmation bias (and the one the Fast Company article is using) and my objection becomes kind of pointless.
But not completely pointless. I still think it’s helpful to approach confirmation bias by thinking of it as a normal form of reasoning, and then asking under what conditions it fails.
Not as far as I know. Wikipedia gives three aspects of confirmation bias:
Biased search: seeking out stories about coyotes but not polar bears.
Biased interpretation: hearing an unknown animal rustle in the bushes, and treating that as additional evidence that coyotes outnumber polar bears.
Biased recall: remembering coyote encounters more readily than polar bear encounters.
All of those seem different from your example, and none are valid Bayesian reasoning.
Some forms of biased recall are Bayesian. This is because “recall” is actually a process of reconstruction from noisy data, so naturally priors play a role.
Here’s a fun experiment showing how people’s priors on fruit size (pineapples > apples > raspberries …) influenced their recollection of synthetic images where the sizes were manipulated: A Bayesian Account of Reconstructive Memory
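The flavor of the model is easy to sketch: treat the memory trace as a noisy observation and blend it with the prior. A minimal sketch assuming a Gaussian prior and Gaussian noise (the numbers and the Gaussian form are my assumptions, not values from the paper):

```python
# Bayesian reconstruction sketch: the recalled size is a precision-weighted
# blend of the prior ("apples are about this big") and the noisy memory trace
# of the manipulated image. All numbers are illustrative assumptions.

prior_mean, prior_var = 8.0, 1.0  # prior belief about apple diameter (cm)
obs, obs_var = 11.0, 4.0          # noisy memory of the oversized image

# Conjugate Gaussian update: the posterior mean is pulled from the observation toward the prior
posterior_mean = (prior_mean / prior_var + obs / obs_var) / (1 / prior_var + 1 / obs_var)

print(round(posterior_mean, 2))  # 8.6 -- recalled size is biased toward a typical apple
```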
I think this framework captures about half of the examples of biased recall mentioned in the Wikipedia article.
Related: Jess Whittlestone’s PhD thesis, titled “The importance of making assumptions: why confirmation is not necessarily a bias.”
As Stuart previously recognized with the anchoring bias, it’s probably worth keeping in mind that any bias is only a “bias” against some normative backdrop. Without some standard for how reasoning was supposed to turn out, there are no biases, only the way things happened to work.
Thus things look confusing around confirmation bias, because it only becomes a bias when the reasoning produces a result that fails to predict reality after the fact. Otherwise it’s just correct reasoning based on priors.
See also Mercier & Sperber 2011 on confirmation bias.
I often found myself in situations where I overupdated on the evidence. For example, if the market fell 3 percent, I would start thinking that economic collapse was imminent.
Overupdating on random evidence is also a source of some conspiracy theories. The license plate of a car on my street matches my birthday? They must be watching me!
The protection trick here is “natural scepticism”: just not update if you want to update your beliefs. But in this case the prior system becomes too rigid.
(Not update if you want to protect your beliefs, or not update if you don’t want to update your beliefs?)
Skepticism isn’t just “not updating”. And protection from what?
I like the general thought here: some of the decision tools that are supposed to help us make better decisions are themselves potentially subject to the same problems we’re trying to avoid.
I wonder if the pathway might not be about the prior on the event itself (polar bears in Berkeley) but about an update to the reliability of the evidence presented: your friend reporting the polar bear gets your prior on her reliability/honesty downgraded, which helps confirm the original prior about polar bears in Berkeley.
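That reads like a joint update over two hypotheses at once: the event itself and the reliability of the source. A rough sketch of how that could go, with invented numbers (the structure and every value below are illustrative assumptions):

```python
# Joint update over (bear present?, friend reliable?). The surprising report
# mostly lowers the probability that the friend is reliable, while the prior
# on bears barely moves. All numbers are invented for illustration.

p_bear = 1e-6       # prior: a polar bear is loose in Berkeley
p_reliable = 0.95   # prior: the friend is a reliable reporter

# probability the friend reports a bear, given each joint state
p_report = {
    (True, True): 0.9,     # bear present, friend reliable
    (True, False): 0.5,    # bear present, friend unreliable
    (False, True): 1e-4,   # no bear, friend reliable
    (False, False): 0.05,  # no bear, friend unreliable (jokes, mistakes)
}

# joint posterior is proportional to prior(bear) * prior(reliable) * P(report | bear, reliable)
joint = {
    (b, r): (p_bear if b else 1 - p_bear)
            * (p_reliable if r else 1 - p_reliable)
            * p_report[(b, r)]
    for b in (True, False)
    for r in (True, False)
}
total = sum(joint.values())

post_bear = sum(v for (b, r), v in joint.items() if b) / total
post_reliable = sum(v for (b, r), v in joint.items() if r) / total
print(f"P(bear | report)     = {post_bear:.4f}")      # still tiny
print(f"P(reliable | report) = {post_reliable:.3f}")  # drops well below 0.95
```

With numbers like these, the report barely budges the polar-bear prior but sharply downgrades the friend’s assumed reliability, which is exactly the pathway described above.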
While “isolated demands for rigor” may be suspect, an outlier could be the result of high measurement error* or model failure. (Though people may be systematically overconfident in their models.)
*Which has implications for the model: the data previously thought correct may contain smaller amounts of error.