Confirmation bias is a cognitive phenomenon that has drawn considerable attention in psychology and behavioural economics.
Traditionally, the search for confirmatory information has been portrayed as a bias, that is, a systematic deviation from rational information processing. On closer examination, however, this characterisation is not grounded in any precise account of what the best information-processing strategy actually is.
Optimal information acquisition means making the best use of limited time and resources. In that light, the optimal strategy can be to look for confirmatory evidence that aligns with one’s pre-existing beliefs. This point has been made by several researchers, and recently by Zhong (Optimal Dynamic Information Acquisition, Econometrica, 2022) in a general framework where a decision-maker can flexibly choose what type of information to collect under two constraints: more informative signals are more costly, and waiting longer to collect information is costly. Zhong shows that the decision-maker’s optimal strategy is to look for confirmatory information in the form of a Poisson process that occasionally delivers strong signals confirming the decision-maker’s prior belief (when that prior belief is right).
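To make the mechanics concrete, here is a crude discrete-time sketch of my own (not Zhong’s actual continuous-time model): a fully revealing confirmatory signal arrives with some per-period probability when the prior is right, and silence is weak Bayesian evidence against the prior. All parameters (`q`, the 0.95 confidence threshold) are illustrative assumptions.

```python
import random

def update_on_silence(p, q):
    # Bayes' rule when no signal arrives: arrivals occur only in state 1
    # (with per-period probability q), so silence weakly favours state 0.
    return p * (1 - q) / (p * (1 - q) + (1 - p))

def simulate(prior=0.8, q=0.1, threshold=0.95, seed=0):
    """Decide between states 0 and 1 using confirmatory news only.
    Returns (periods elapsed, decision, true state)."""
    rng = random.Random(seed)
    state = 1 if rng.random() < prior else 0
    p, t = prior, 0
    while 1 - threshold < p < threshold:
        t += 1
        if state == 1 and rng.random() < q:
            p = 1.0  # conclusive confirmatory arrival
        else:
            p = update_on_silence(p, q)
    return t, int(p >= threshold), state

runs = [simulate(seed=s) for s in range(1000)]
errors = sum(decision != state for _, decision, state in runs) / len(runs)
print(f"error rate over 1000 runs: {errors:.3f}")
```

With these numbers the belief either jumps to certainty on an arrival or drifts down slowly in its absence, and wrong decisions are rare; this is only a caricature of the Poisson-confirmation strategy, but it shows how waiting for confirmatory news can implement a demanding confidence standard.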
I develop this general point in the linked Substack post. Another interesting post on this topic is Klein’s The Curious Case of the Confirmation Bias, which presents older criticisms of the psychological evidence for the confirmation bias.
In the end, the notion of “confirmation bias” looks at best overused; at worst, thinking of it as a bias at all may be a misconception.
Hello!
The universe is so complex that it is exceedingly unlikely that you will get it right on the first try—and in order to err and err and err again you can’t just stick to one theory and latch on to it through fire and snow no matter what befalls you. While confirmation probably makes sense at the local level, given the limited time and resources that you mentioned, it doesn’t make sense on greater scales where the stakes are so high that it’s worth being absolutely right rather than being efficient at finding an interim solution. The issues commonly discussed on LessWrong tend to be of the latter category, and so confirmation bias is indeed a bias here.
Any analysis obviously depends on the setting considered. My point addressed the common settings where the confirmation bias has typically been discussed. Other settings, with other payoffs, could potentially give other answers. That being said, your point stresses the size of the stakes, and I believe that large stakes do not invalidate Zhong’s analysis if you have full flexibility in how to acquire information. In his model, you only make a decision once you have reached the right level of confidence (which may be very demanding if the stakes are high).
I think that an issue with high stakes may arise for a different reason: in practice, you may not have enough flexibility in your choice of information source to get one that would confirm your initial belief with enough certainty. For instance, if you believe current AI developments are safe, you may not be able to find or create an information source that will send you a signal providing enough certainty that they are indeed safe. In that case, you are in a sense not hitting your budget constraint with purely confirmatory information, and it could make sense to also acquire information that is not purely confirmatory (a quick conjecture on my part).
In any case, the model helps one think about the problem and about how its different aspects, such as initial beliefs, the cost of errors, and the cost of acquiring information, influence the best policy for acquiring information.
I think a useful heuristic for updating beliefs is to ask yourself “What would make this belief false?” rather than casting the issue in the framework of confirmation vs. balance. To make this concrete, consider the example of flat earthers vs. scientists. If you believed in a flat earth, there are any number of tests you could run (e.g. watching ships sink below the horizon) that would lead you to update towards rejecting your belief. This type of information seeking is neutral with respect to confirming your beliefs. It also allows us to look for more direct evidence bearing on our beliefs rather than appealing to indirect methods such as whether or not a person agrees with us (see Hug the Query).
Second, I haven’t looked into Weijie Zhong’s work, but I was wondering if there might be a bias-variance tradeoff at play here for efficient information seeking (i.e. obtaining only confirmatory evidence seems likely to lead to low variance but high bias)?
I believe that Russell’s teapot does not exist.
On point 2, interesting question about the bias-variance tradeoff. His model has beliefs moving in the interval [0, 1], while the world is either 0 or 1. The question is what kind of information flow will allow you to make the likely right decision in the minimal amount of time.
On point 1, I think Zhong’s framework is general enough to cover the examples you give. If you can choose the type of information to collect very flexibly, and if more informative signals are more costly, it makes more sense to look for confirmation because, given your beliefs, you are more likely to quickly become confident enough to act on them. Contrarian or neutral sources are useful, but in expectation, given your beliefs, they would require more time before you could make a decision.
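As a quick illustration of that expected-time logic, here is a crude discrete-time sketch of my own (again not Zhong’s model): under a confirmatory-news structure, where a fully revealing signal arrives with probability `q` per period when the prior is right and silence weakly disconfirms it, a stronger prior means you expect to reach the confidence threshold sooner. All parameter values are illustrative assumptions.

```python
import random

def run(prior, q=0.1, threshold=0.95, seed=0):
    # Crude confirmatory-news model: in state 1 a fully revealing signal
    # arrives each period with probability q; silence weakly favours state 0.
    rng = random.Random(seed)
    state = 1 if rng.random() < prior else 0
    p, t = prior, 0
    while 1 - threshold < p < threshold:
        t += 1
        if state == 1 and rng.random() < q:
            p = 1.0  # conclusive confirmatory arrival
        else:
            p = p * (1 - q) / (p * (1 - q) + (1 - p))  # Bayes on silence
    return t

for prior in (0.55, 0.75, 0.90):
    mean_t = sum(run(prior, seed=s) for s in range(2000)) / 2000
    print(f"prior={prior:.2f}: mean periods to decide = {mean_t:.1f}")
```

In this toy setup the mean decision time falls as the prior strengthens, consistent with the idea that, given your beliefs, confirmatory evidence gets you to an actionable level of confidence faster in expectation.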
You might be interested in Gigerenzer’s “bias bias” paper (reviewed here):
An example from the paper: