Together with my interpretation of the preceding example, this suggests an analogy between individual/reference-class charity and filtered evidence. The analogy is interesting as a means of transferring understanding of errors in ordinary charity to the general setting, where the salient structure in the sources of evidence could be of any nature.
So what usually goes wrong with charity is that the hypotheses about possible kinds of thinking behind an action/claim are not deliberatively considered (or consciously noticed), so the implicit assumption is intuitive, and can occasionally be comically wrong (or at least overconfident) in a way that would be immediately recognized if considered deliberatively. This becomes much worse if failure of charity is a habit, because then the training data for intuition becomes systematically bad, dragging down the intuition itself to the point where it starts actively preventing deliberative consideration from working correctly, so the error persists even when pointed out. If this branches out into anti-epistemology territory, particularly via memes circulating in a group that justify the wrong intuitions about the thinking of members of another group, we get a popular error with reliably trained cognitive infrastructure for resisting correction.
But indeed this could happen with any kind of work with evidence that needs some Bayes and reasonable hypotheses to stay sane! A habit of not considering obvious possibilities about the origin of evidence risks training systematically wrong intuitions that make noticing their wrongness more difficult. In a group setting, this gets amplified by echo chamber/epistemic bubble effects, which draw their power from the very same error: they are not deliberatively considered as significant forces that shape the available evidence.
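To make the Bayes point concrete, here is a minimal sketch of a filtered-evidence update (my own toy numbers, assumed for illustration, not taken from the example above). A naive observer treats a favorable report as a random sample of the evidence; a filter-aware observer also includes a hypothesis about the process that decided which report reaches them.

```python
# A minimal sketch (illustrative numbers only): how leaving the
# evidence-selecting process out of the hypothesis space distorts
# a Bayesian update.

def posterior(prior, lik_h, lik_not_h):
    """P(H | e) by Bayes' rule for a binary hypothesis H."""
    joint_h = prior * lik_h
    joint_not_h = (1 - prior) * lik_not_h
    return joint_h / (joint_h + joint_not_h)

prior = 0.5  # P(H): e.g., "the claim's author reasoned well"

# Naive update: treat a favorable report as a random draw of evidence.
naive = posterior(prior, lik_h=0.8, lik_not_h=0.2)

# Filter-aware update: model the reporter as searching many observations
# and showing a favorable one whenever any exists, so even under not-H a
# favorable report is very likely. (0.99 and 0.95 are assumed numbers.)
aware = posterior(prior, lik_h=0.99, lik_not_h=0.95)

print(f"naive posterior:        {naive:.2f}")   # ~0.80
print(f"filter-aware posterior: {aware:.2f}")   # ~0.51
```

Once the selection process is in the hypothesis space, the same report carries almost no information, which is exactly what the unexamined intuitive assumption hides.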