Any analysis obviously depends on the setting considered. My point addressed the common settings in which confirmation bias has typically been studied. Other settings, with other payoffs, could potentially give other answers. That being said, your point stresses the size of the stakes, and I believe that large stakes do not invalidate Zhong’s analysis if you have full flexibility in how you acquire information. In his model, you only make a decision once you have reached the right level of confidence (which may be very demanding if the stakes are high).
I think an issue with high stakes may arise for a different reason: in practice, you may not have enough flexibility in your choice of information source to find one that would confirm your initial belief with enough certainty. For instance, if you believe current AI developments are safe, you may not be able to get or create an information source whose signal provides enough certainty that they are indeed safe. In that case, you are in a sense not hitting your budget constraint with purely confirmatory information, and it could make sense to also acquire information that is not purely confirmatory (a quick conjecture on my part).
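To make the conjecture concrete, here is a quick Bayesian sketch (all numbers are made-up assumptions for illustration, not part of Zhong's model): even a maximally favorable report from the best confirmatory source available may carry too little evidence to push the posterior past the confidence threshold that high stakes demand.

```python
def posterior(prior, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.90       # assumed initial belief that current AI developments are safe
threshold = 0.99   # assumed confidence required to act, given the high stakes
lr = 2.0           # assumed likelihood ratio of the best confirmatory signal on offer

p = posterior(prior, lr)
# Even after the most favorable possible signal, p ~ 0.947 < 0.99:
# purely confirmatory information cannot get you to the decision threshold.
```

In this toy setup, the gap between `p` and `threshold` is exactly the room left in the "budget" that non-confirmatory information could fill.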
In any case, the model helps in thinking about the problem and about how its different aspects, such as initial beliefs, the costs of errors, and the cost of acquiring information, shape the best policy for acquiring information.