The universe is so complex that it is exceedingly unlikely that you will get it right on the first try, and in order to err and err and err again you can't latch on to one theory and stick with it through fire and snow no matter what befalls you. While confirmation probably makes sense at the local level, given the limited time and resources you mentioned, it doesn't make sense at larger scales, where the stakes are so high that it is worth being absolutely right rather than efficient at finding an interim solution. The issues commonly discussed on LessWrong tend to be of the latter category, and so confirmation bias is indeed a bias here.
Any analysis obviously depends on the setting considered. My point addressed the common settings in which confirmation bias has typically been studied; other settings, with other payoffs, could give other answers. That said, your point stresses the size of the stakes, and I believe that large stakes do not invalidate Zhong's analysis as long as you have full flexibility in how you acquire information. In his model, you only make a decision once you have reached the required level of confidence (which may be very demanding when the stakes are high).
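To make the threshold logic concrete, here is a minimal toy sketch (my own illustration, not Zhong's actual model; the prior, signal accuracy, and thresholds are all made-up numbers): a Bayesian agent keeps buying noisy binary signals until its posterior on one of the two hypotheses crosses a confidence threshold, and higher stakes simply translate into a higher threshold, so the agent buys more information before deciding rather than deciding worse.

```python
import random

def simulate(prior, accuracy, threshold, n_runs=10_000, seed=0):
    """A Bayesian agent buys conditionally i.i.d. binary signals of the given
    accuracy until its posterior on either hypothesis reaches `threshold`,
    then decides. Returns (average signals bought, fraction decided correctly)."""
    rng = random.Random(seed)
    total_signals = correct = 0
    for _ in range(n_runs):
        state = rng.random() < prior       # true state, e.g. "AI is safe"
        p = prior                          # posterior that state is True
        n = 0
        while max(p, 1 - p) < threshold:
            n += 1
            # the signal matches the true state with probability `accuracy`
            sig = state if rng.random() < accuracy else not state
            like_true = accuracy if sig else 1 - accuracy
            like_false = 1 - accuracy if sig else accuracy
            p = p * like_true / (p * like_true + (1 - p) * like_false)
        total_signals += n
        correct += (p >= threshold) == state
    return total_signals / n_runs, correct / n_runs

# Higher stakes -> a more demanding confidence threshold before acting.
for threshold in (0.9, 0.99, 0.999):
    n, acc = simulate(prior=0.7, accuracy=0.6, threshold=threshold)
    print(f"threshold {threshold}: avg signals {n:5.1f}, accuracy {acc:.3f}")
```

The point the printout makes is that the realized decision accuracy tracks the threshold, not the prior: demanding more certainty costs more signals, but the eventual decision still clears the bar you set.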
I think an issue with high stakes may arise for a different reason: in practice, you may not have enough flexibility in your choice of information source to find one that would confirm your initial belief with enough certainty. For example, if you believe current AI developments are safe, you may not be able to get or create an information source whose signal provides enough certainty that they are indeed safe. In that case, you are in a sense not hitting your budget constraint with purely confirmatory information, and it could make sense to also acquire information that is not purely confirmatory (a quick conjecture on my part).
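As a quick back-of-the-envelope illustration of that conjecture (the numbers are invented, not from any source): suppose you need 0.999 confidence to act, your prior that current AI developments are safe is 0.8, and the most confirmatory source you can actually construct says "safe" 95% of the time when things are safe but still 10% of the time when they are not. Then even observing the confirming signal leaves you short of the bar:

```python
def posterior(prior, p_sig_if_true, p_sig_if_false):
    """Posterior on the hypothesis after seeing the signal (Bayes' rule)."""
    num = prior * p_sig_if_true
    return num / (num + (1 - prior) * p_sig_if_false)

print(posterior(0.8, 0.95, 0.10))  # ~0.974 < 0.999
```

With these made-up numbers, even the best single confirming observation caps your posterior below the required level, which is one way of seeing why, once the purely confirmatory route is exhausted, adding non-confirmatory information can be the sensible next step.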
In any case, the model helps us think about the problem and about how its different aspects, such as initial beliefs, the costs of errors, and the cost of acquiring information, influence the best policy for acquiring information.