It’s because theism (as in the claim “Theism is evidence that a person is irrational”) is evidence that is very easily screened off by other, easily noted characteristics. Suppose for the sake of argument that, given no knowledge about a person other than the label “theist”, it is more likely they will be wrong about some other subject than if the label were “atheist”.
Fine and dandy, but trace the flow of evidence through the causal diagram: “theist” is less likely given “they’re rational”, so observing it makes “they’re irrational” more likely. By “irrational” here I mean that they have some set of cognitive algorithms, not shared by all humans, which makes them wrong about many subjects. This then propagates evidence that they will be wrong about some other specific subject. But that propagation is screened off by evidence that these hypothesized cognitive algorithms do not in fact make them wrong about that other specific subject. And such evidence is readily gathered by reading a post or two of theirs on the subject in question.
You could’ve said it more simply: reading a couple of essays a person has written on a subject is more informative about whether they are reasonable about that subject than learning whether they are a theist.
Well, that’s true, but it misses the point: not only is the “reading essays” evidence more informative than the “theist” evidence, the former also radically changes how you should update on the latter. If most of the probability flow from “theist” to “wrong about the other subject” passes through the bit that “reading essays” makes improbable, then, to make up arbitrary exaggerated numbers with the right qualitative behavior:
L(wrong|theist) = log(P(wrong|theist)/P(wrong|~theist)) = 0.1
L(wrong|reasonable essays) = −1.0
L(wrong|theist & reasonable essays) = −0.99, rather than the −0.9 you would get by naively adding the two weights.
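The qualitative behavior can be reproduced in a small sketch. All conditional probabilities below are invented for illustration, and the network structure is my own guess at the diagram described above: a latent node R (“rational about this subject”) is a common cause of T (“theist”), W (“wrong about the other subject”), and E (“their essays read as reasonable”). Given R, the three children are independent, so the evidence from T flows to W only through R, and direct evidence about R partially screens it off:

```python
from itertools import product
from math import log10

# Toy Bayes net (all numbers invented for illustration).
# R is a common cause of T, W, and E; given R they are independent.
P_R = 0.7
P_T = {True: 0.2, False: 0.5}   # P(theist | R)
P_W = {True: 0.1, False: 0.6}   # P(wrong about subject | R)
P_E = {True: 0.8, False: 0.1}   # P(essays read as reasonable | R)

def joint(r, t, w, e):
    """Probability of one full assignment, using the factorization above."""
    p = P_R if r else 1 - P_R
    p *= P_T[r] if t else 1 - P_T[r]
    p *= P_W[r] if w else 1 - P_W[r]
    p *= P_E[r] if e else 1 - P_E[r]
    return p

def log_odds_shift(**evidence):
    """log10 odds(W | evidence) - log10 odds(W): the evidence's weight on W."""
    def odds(fixed_vals):
        num = den = 0.0
        for r, t, e in product([True, False], repeat=3):
            assignment = dict(t=t, e=e)
            if all(assignment[k] == v for k, v in fixed_vals.items()):
                num += joint(r, t, True, e)   # W = wrong
                den += joint(r, t, False, e)  # W = not wrong
        return num / den
    return log10(odds(evidence)) - log10(odds({}))

L_T  = log_odds_shift(t=True)            # theism alone: evidence for "wrong"
L_E  = log_odds_shift(e=True)            # reasonable essays: evidence against
L_TE = log_odds_shift(t=True, e=True)    # both together

print(f"L(T) = {L_T:+.3f},  L(E) = {L_E:+.3f}")
print(f"L(T&E) = {L_TE:+.3f}  vs naive sum {L_T + L_E:+.3f}")
# The residual weight of "theist" after seeing the essays, L_TE - L_E,
# comes out smaller than its standalone weight L_T: partial screening off.
```

With these made-up numbers the theist evidence alone carries positive weight toward “wrong”, but once the essays are observed its residual contribution shrinks well below that standalone weight, which is exactly the −0.99-rather-than-−0.9 pattern.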