Another thing I feel like I see a lot on LW is disagreements where there’s a heavy thumb of popularity or reputational costs on one side of the scale, but nobody talks about the thumb. That makes it hard to tell whether people are internally trying to correct for the thumb or just substituting the thumb for whatever parts of their reasoning or intuition they’re not explicitly talking about, and a lot of what looks like disagreement about the object-level arguments being presented may actually be disagreement about the thumb. For example, in the case of the parent comment, maybe such a thumb is driving judgments of the relative values of oranges and pears.
Together with my interpretation of the preceding example, this suggests an analogy between individual/reference-class charity and filtered evidence. The analogy is interesting as a means of transferring an understanding of errors in ordinary charity to the general setting, where the salient structure in the sources of evidence could have any nature.
So what usually goes wrong with charity is that the hypotheses about the possible kinds of thinking behind an action or claim are not deliberatively considered (or consciously noticed), so the implicit assumption is intuitive, and can occasionally be comically wrong (or at least overconfident) in a way that would be immediately recognized if it were considered deliberatively. This becomes much worse if failure of charity is a habit, because then the training data for intuition can become systematically bad, dragging down the intuition itself to the point where it starts actively preventing deliberative consideration from working correctly, so the error persists even when it's pointed out. If this branches out into anti-epistemology territory, particularly via memes circulating in a group that justify wrong intuitions about the thinking of members of another group, we get a popular error with a reliably trained cognitive infrastructure for resisting correction.
But indeed this could happen with any kind of reasoning from evidence that needs some Bayes and reasonable hypotheses to stay sane! A habit of not considering obvious possibilities about the origin of evidence risks training systematically wrong intuitions that make noticing their wrongness more difficult. In a group setting, this gets amplified by echo chamber/epistemic bubble effects, which draw their power from the very same error: not being deliberatively considered as significant forces that shape the available evidence.
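To make the Bayes point concrete, here's a minimal sketch in Python, with made-up numbers and a hypothetical scenario: a naive likelihood asks how probable the evidence itself is under each hypothesis, while a filter-aware likelihood asks how probable it is that this particular evidence reached me, given whoever selected it.

```python
# A minimal sketch (hypothetical numbers) of why hypotheses about the
# *origin* of evidence matter: the same observation supports a very
# different update once the selection process is part of the model.

def posterior(prior_h: float, lik_h: float, lik_not_h: float) -> float:
    """Bayes rule for a binary hypothesis H given one observation."""
    joint_h = prior_h * lik_h
    joint_not_h = (1.0 - prior_h) * lik_not_h
    return joint_h / (joint_h + joint_not_h)

prior = 0.5  # P(H): e.g. "the author was reasoning carefully"

# Naive likelihoods: P(this damning quote exists | H) vs. P(... | not H).
naive = posterior(prior, lik_h=0.1, lik_not_h=0.5)

# Filter-aware likelihoods: P(I am *shown* this quote | H) vs. (... | not H),
# where a hostile curator picks the worst available quote either way.
aware = posterior(prior, lik_h=0.4, lik_not_h=0.5)

print(f"naive posterior P(H | quote): {naive:.2f}")  # ~0.17
print(f"filter-aware posterior:       {aware:.2f}")  # ~0.44
```

The numbers are purely illustrative; the only point is that the two posteriors diverge once the evidence-shaping process is deliberatively considered rather than left to intuition.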