Does belief quantization explain (some amount of) polarization?
Suppose people generally do Bayesian updating on beliefs. It seems plausible that most people (unless trained to do otherwise) subconsciously quantize their beliefs—let’s say, for the sake of argument, by rounding to the nearest 1%. In other words, if someone’s posterior on a statement is 75.2%, it will be rounded to 75%.
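As a concrete illustration of this rounding hypothesis, here is a minimal Python sketch of a single Bayesian update followed by 1% quantization; the likelihood ratio of 2 is a made-up number used only for the example.

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior after one 'fact', given P(fact | claim) / P(fact | not claim)."""
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

def quantize(belief: float, step: float = 0.01) -> float:
    """Round a belief to the nearest 1%, the hypothesized subconscious rounding."""
    return round(belief / step) * step

print(f"{quantize(0.752):.2f}")                    # 0.75, as in the example above
print(f"{quantize(bayes_update(0.75, 2.0)):.2f}")  # one mildly supportive fact: 0.86
```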
Consider questions that exhibit group-level polarization (e.g. on climate change, or the morality of abortion, or whatnot), and imagine that there is a collection of “facts” floating around that someone uninformed doesn’t yet know about.
If one is exposed to the facts in a randomly chosen order, one will arrive at some reasonable posterior after all of them have been processed—in fact, we can use this as a computational definition of what it would be rational to conclude.
However, suppose that you are exposed to the facts that support the in-group position first (e.g. while coming of age in your own tribe) and to the ones that contradict it later (e.g. when you leave the nest); this ordering is plausible if your in-group is chronologically your first source of intel. In that case, if you update on sufficiently many facts supporting the in-group stance, and you quantize, you’ll end up with a 100% belief in the in-group stance (or, equivalently, a 0% belief in the out-group stance), after which point you will be essentially unmoved by any contradictory facts you are exposed to later, since quantization has locked you into full and unshakeable conviction.
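To see the lock-in mechanism at work, here is a toy simulation; it is only a sketch under assumed numbers. The likelihood ratios, the number of facts, and the use of a simple alternating order as a stand-in for random exposure are all made-up parameters, chosen just to illustrate the order dependence under 1% quantization.

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

def quantize(belief: float, step: float = 0.01) -> float:
    return round(belief / step) * step

def final_belief(facts, prior=0.5, quantized=True):
    """Update on each fact in order, optionally rounding to 1% after every update."""
    p = prior
    for lr in facts:
        if p in (0.0, 1.0):   # absorbed: a 0% or 100% belief never moves again
            return p
        p = bayes_update(p, lr)
        if quantized:
            p = quantize(p)
    return p

# Hypothetical evidence: 20 facts favoring the in-group stance (LR = 3) and
# 20 equally strong facts against it (LR = 1/3), so jointly they cancel out.
supportive    = [3.0] * 20
contradictory = [1 / 3] * 20

# In-group-first exposure: all supportive facts, then all the contradictions.
print(f"{final_belief(supportive + contradictory):.2f}")            # 1.00 (locked in)

# Mixed exposure (alternating here, as a stand-in for a random order).
mixed = [lr for pair in zip(supportive, contradictory) for lr in pair]
print(f"{final_belief(mixed):.2f}")                                 # 0.50

# Without quantization, the order is irrelevant and the evidence cancels.
print(f"{final_belief(supportive + contradictory, quantized=False):.2f}")  # 0.50
```

Under these made-up numbers, five supportive facts in a row are enough to get rounded all the way up to 100%, after which the check at the top of the loop shows why the later contradictory facts have no effect.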
One way to resist this is to refuse to ever be fully convinced of anything. However, this comes at a cost, since it’s cognitively expensive to hold onto very small numbers, and to intuitively update them well.
See also: different practitioners in the same field with very different methodologies, each sure that theirs is the Best Way To Do Things.