What’s the name of the proto-fallacy that goes like “you should exchange your oranges for pears because then you’ll have more pears”, suggesting that the question can be resolved, or has already been resolved, without ever considering the relative value of oranges and pears? I feel like I see it all over the place, including on LW.
Sounds like failing at charity: not trying to figure out what thinking produced a claim/question/behavior, and misinterpreting it as a result. In your example, there is an implication of difficulty with noticing the obvious, when the correct explanation is most likely a different objective, which should be clear if the question is given half a thought. In some cases, running with the literal meaning of a claim as stated is itself a misinterpretation, since it differs from the intended meaning.
Another thing I feel like I see a lot on LW is disagreements where there’s a heavy thumb of popularity or reputational cost on one side of the scale, but nobody talks about the thumb. That makes it hard to tell whether people are internally trying to correct for the thumb, or just substituting it for whatever parts of their reasoning or intuition they’re not explicitly talking about; a lot of what looks like disagreement about the object-level arguments being presented may actually be disagreement about the thumb. For example, in the case of the parent comment, maybe such a thumb is driving judgments of the relative value of oranges and pears.
Together with my interpretation of the preceding example, this suggests an analogy between individual/reference-class charity and filtered evidence. The analogy is interesting as a means of transferring an understanding of errors in ordinary charity to the general setting, where the salient structure in the sources of evidence can be of any kind.
So what usually goes wrong with charity is that the hypotheses about the possible kinds of thinking behind an action/claim are not deliberatively considered (or even consciously noticed), so the implicit assumption is intuitive, and it can occasionally be comically wrong (or at least overconfident) in a way that would be immediately recognized if considered deliberatively. This becomes much worse if failure of charity is a habit, because then the training data for intuition becomes systematically bad, dragging down the intuition itself to the point where it starts actively preventing deliberative consideration from working correctly, so the error persists even when pointed out. If this branches out into anti-epistemology territory, particularly via memes circulating in a group that justify the wrong intuitions about the thinking of members of another group, we get a popular error with a reliably trained cognitive infrastructure for resisting correction.
But indeed this could happen with any kind of work with evidence that needs some Bayes and reasonable hypotheses to stay sane! So a habit of not considering the obvious possibilities about the origin of evidence risks training systematically wrong intuitions that make noticing their own wrongness more difficult. In a group setting, this gets amplified by echo chamber/epistemic bubble effects, which draw their power from the very same error: not being deliberatively considered as significant forces that shape the available evidence.
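To make the filtered-evidence point concrete, here is a minimal numeric sketch (the setup and numbers are my own illustration, not anything from the thread): the “thumb” plays the role of a source that only passes along evidence pointing one way, and the hypotheses about the origin of evidence are exactly what the naive update leaves out.

```python
# Minimal sketch with made-up numbers: a hypothesis H, trials that come up
# "positive" with P(+|H)=0.8 and P(+|not H)=0.3, and a source that watches
# 10 trials but only ever reports a positive one. Compare the naive update
# (treat the reported positive as a random draw) with an update that models
# the filter.

p_h = 0.5            # prior on H
p_pos_h = 0.8        # P(positive trial | H)
p_pos_not_h = 0.3    # P(positive trial | not H)
n_trials = 10        # trials the selective source gets to watch

# Naive: pretend the reported positive was a randomly sampled trial.
naive = (p_pos_h * p_h) / (p_pos_h * p_h + p_pos_not_h * (1 - p_h))

# Filter-aware: the actual event is "the source found at least one positive
# to report", which is nearly certain under either hypothesis.
p_report_h = 1 - (1 - p_pos_h) ** n_trials
p_report_not_h = 1 - (1 - p_pos_not_h) ** n_trials
aware = (p_report_h * p_h) / (p_report_h * p_h + p_report_not_h * (1 - p_h))

print(f"naive posterior:        {naive:.3f}")   # ~0.727
print(f"filter-aware posterior: {aware:.3f}")   # ~0.507
```

The exact numbers don’t matter; the point is that the correction only shows up if the process deciding which evidence reaches you is deliberately included among the hypotheses.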