I think a useful heuristic for updating beliefs is to ask yourself “What would make this belief false?” rather than casting the issue in the framework of confirmation vs. balance. To make this concrete, consider the example of flat earthers vs. scientists. If you believed in a flat earth, there are any number of tests you could run (e.g. watching ships disappear hull-first below the horizon) that could lead you to update toward falsifying your belief. This type of information seeking is neutral with respect to confirming your beliefs. It also lets us look for direct evidence bearing on our beliefs rather than appealing to indirect methods such as whether or not a person agrees with us (see “hug the query”).
Second, I haven’t looked into the work of Weijie Zhong, but I was wondering whether there might be a bias-variance tradeoff at play here for efficient information seeking (i.e. obtaining only confirmatory evidence seems likely to lead to low variance but high bias)?
On point 2, that’s an interesting question about bias-variance. In his model, beliefs move in the interval [0, 1] and the true state of the world is either 0 or 1. The question is what kind of flow of information will let you make the correct decision in the minimal amount of time.
On point 1, I think Zhong’s framework is general enough to cover the examples you give. If you can choose the type of information to collect very flexibly, and if more informative signals are more costly, it makes more sense to look for confirmation because, given your beliefs, you are more likely to become confident enough to act on them quickly. Contrarian or neutral sources are useful, but in expectation, given your beliefs, they would require you to take more time before making a decision.
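To make the time-to-decision framing concrete, here is a toy sketch (not Zhong’s actual model, and all parameters are made up for illustration): the world is 0 or 1, you hold a prior belief, and you draw binary signals and update by Bayes’ rule until your belief crosses a confidence threshold. A “confirmatory” source is modeled here as an asymmetric signal that usually agrees with the favored state but is very informative on the rare occasions it disagrees; a “balanced” source is symmetric.

```python
import random

def update(p, signal, q1, q0):
    # Bayes' rule for P(world=1 | signal), where
    # P(signal=1 | world=1) = q1 and P(signal=1 | world=0) = q0.
    if signal == 1:
        num, den = p * q1, p * q1 + (1 - p) * q0
    else:
        num, den = p * (1 - q1), p * (1 - q1) + (1 - p) * (1 - q0)
    return num / den

def time_to_decide(prior, q1, q0, world, threshold=0.95, rng=None):
    # Draw signals and update until the belief is confident enough
    # (above `threshold` or below 1 - threshold) to act on.
    rng = rng or random.Random()
    p, t = prior, 0
    while (1 - threshold) < p < threshold:
        s = 1 if rng.random() < (q1 if world == 1 else q0) else 0
        p = update(p, s, q1, q0)
        t += 1
    return t

def mean_time(prior, q1, q0, trials=2000, seed=0):
    # Average decision time, with the true world drawn from the prior.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        world = 1 if rng.random() < prior else 0
        total += time_to_decide(prior, q1, q0, world, rng=rng)
    return total / trials

# Starting from a belief of 0.7 that the world is 1:
print(mean_time(0.7, q1=0.75, q0=0.25))  # balanced, symmetric signal
print(mean_time(0.7, q1=0.95, q0=0.60))  # "confirmatory": mostly says 1; a 0 is strong news
```

This only illustrates the stopping-time framing (cost here is just the number of draws, not an explicit price per signal), but it lets you experiment with how skewing the signal toward your prior changes how long it takes to reach a decision.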
I believe that Russell’s teapot does not exist.