Yes. With a small change in words, I convinced myself your logic is not circular:
Evidence supports: Atheism is true and theism is false.
Therefore, atheism is a rational belief (based on evidence) and theism is an irrational belief (not based on evidence).
Given the assumption that rational people tend to hold rational beliefs rather than irrational ones…
Theism is evidence that a person is irrational.
The logic seems OK, but for some reason I don’t find that satisfying. I don’t feel convinced in the way I usually do when something is true. Can anyone help me identify why? (I will only be grateful if you identify an idiosyncratic irrationality.)
Later: The reason I don’t find this satisfying, after thinking about it a while, is that I would like “true” to have more significance. I guess I don’t care whether something is true if it has no predictive consequence. And I think that that is a rational stance.
It’s because theism (“Theism is evidence that a person is irrational.”) is evidence that is very easily screened off by other easily observed characteristics. Suppose for the sake of argument that, given no knowledge about a person other than the label “theist” rather than “atheist”, they are more likely to be wrong about some other subject than if their label were “atheist”.
Fine and dandy, but trace the flow of evidence through the causal diagram: “theist” is less likely given “they’re rational”, so now “they’re irrational” is more likely. (By “irrational” here I mean they have some set of cognitive algorithms, not shared by all humans, which makes them wrong about many subjects.) This then directly propagates evidence that they will be wrong about some other specific subject. But it is screened off by evidence that these hypothesized cognitive algorithms do not in fact make them wrong about that other specific subject. And that evidence is readily gathered by reading a post or two of theirs on the subject in question.
You could’ve said it simpler: Reading a couple of essays on the subject written by a person is more informative about whether the person is reasonable about that subject than learning whether the person is a theist.
Well, that’s true, but it misses the point that not only is the “reading essays” evidence more informative than the “theist” evidence, the former radically changes how you should update on the latter. If most of the probability flow from “theist” to “wrong about other subject” flows through the bit that “reading essays” makes improbable, then to make up arbitrary exaggerated numbers with the right qualitative behavior:
log(P(wrong|theist)/P(wrong|~theist)) = L(wrong|theist) = 0.1
L(wrong|reasonable essays) = −1.0
L(wrong|theist & reasonable essays) = −0.99, rather than the −0.9 you would get by naively adding the two.
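The screening-off effect behind those made-up numbers can be checked with a toy calculation. The sketch below is an illustration, not anything from the thread: every variable name and probability is invented, and the only structural assumption is the one the argument relies on, namely that “theist”, “reasonable essays”, and “wrong about the subject” are conditionally independent given a latent “broken cognitive algorithms” node, so all evidential flow from the label to the error passes through that node.

```python
import math

# Toy latent-variable model (all numbers invented for illustration):
# A = has "broken" cognitive algorithms, T = theist,
# E = writes reasonable essays on the subject, W = wrong about that subject.
# T, E, W are assumed conditionally independent given A.
p_A = 0.2
p_T = {True: 0.8, False: 0.3}   # P(theist | A)
p_E = {True: 0.05, False: 0.7}  # P(reasonable essays | A)
p_W = {True: 0.9, False: 0.1}   # P(wrong | A)

def p_wrong(t=None, e=None):
    """P(W | T=t, E=e), marginalising over the latent variable A."""
    num = den = 0.0
    for a in (True, False):
        weight = p_A if a else 1 - p_A
        if t is not None:
            weight *= p_T[a] if t else 1 - p_T[a]
        if e is not None:
            weight *= p_E[a] if e else 1 - p_E[a]
        num += weight * p_W[a]
        den += weight
    return num / den

# Log-likelihood shift toward "wrong" from the "theist" label alone:
L_theist = math.log10(p_wrong(t=True) / p_wrong(t=False))
# The same shift after already having read reasonable essays:
L_theist_given_essays = math.log10(p_wrong(t=True, e=True) / p_wrong(t=False, e=True))

print(f"shift from 'theist' alone:         {L_theist:+.2f}")
print(f"shift from 'theist', given essays: {L_theist_given_essays:+.2f}")
```

With these particular made-up numbers the shift from the theist label drops from roughly +0.44 to roughly +0.12 once the essay evidence is in hand: the same qualitative behavior as the −0.99-rather-than−0.9 example, because most of what “theist” was telling you about W went through the latent node that the essays have already largely ruled out.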