How is the author's religiosity relevant?
In short: if someone believes one incorrect thing (and quite a big one, not something that can be overlooked), that raises the probability that he believes other incorrect things as well.
The long explanation is that if the author believes in religion (something I consider wrong—I know I might be wrong about this, but for reasons too long and irrelevant to write out, I act as if I were certain that God doesn't exist), then not only did decades of exposure to information fail to change his mind, he adopted the belief in the first place—accepting a conclusion that doesn't follow from its premises. That is a (not necessarily strong) counter-indicator of the ability to perceive the truth (I'm not sure whether "critical thinking" is a synonym for that). Although it is entirely possible that he is in fact absolutely right, the expected outcome (I can't explain "expected outcome" here—I even started a thread on it—so I trust the reader to know what it is, otherwise the debate usually can't happen) is that he is more likely to have flaws. Especially on a subject where he might follow anecdotal evidence and neglect the importance of the scientific approach (some subjects may be more intuitive than others).
Of course, if I've read the book, all the information about the author becomes irrelevant, since it only serves to predict the book's content. However, reading the book takes many hours (especially in my case—I'm currently desperate for time, and I tend to use it inefficiently even when I'm not procrastinating).
P.S. Sorry if you can't follow how I use expected value to reach that conclusion; I just can't explain it intuitively, and I even have (small) doubts about the validity of that approach, because I have yet to see evidence for it.
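To make the update explicit, here is a minimal Bayesian sketch of the reasoning above; the events and the assumed inequality are illustrative framing on my part, not anything the original comment specified. Let $F$ stand for "the book contains serious reasoning flaws" and $R$ for "the author is religious". Bayes' rule gives

$$P(F \mid R) \;=\; \frac{P(R \mid F)\,P(F)}{P(R \mid F)\,P(F) + P(R \mid \lnot F)\,P(\lnot F)},$$

so under the assumption that $P(R \mid F) > P(R \mid \lnot F)$—that is, flawed reasoners are at least slightly more likely to hold a large unsupported belief—it follows that $P(F \mid R) > P(F)$. Learning that the author is religious raises the estimated probability of flaws, but only in proportion to how large that assumed gap is, which is consistent with treating his religiosity as weak evidence rather than a decisive strike.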
I think you’re likely overweighting this, at least in the general case.
It’s hard to overestimate how good people are at selectively interpreting, and more importantly compartmentalizing, evidence to fit their identities. Now, selective interpretation alone would support your line of thinking—if people accept only those data points that fit some preconceived notions, then of course their opinions aren’t good evidence for anything related to those ideas, and religion theoretically touches just about everything. But when you take compartmentalization into account, it becomes possible—even likely—for people to hold sweeping irrational beliefs without significantly damaging their reasoning abilities on questions more than a couple of inferential steps away: inference isn’t ignored, it just isn’t propagated all the way through a network of beliefs.
If I’m considering a book by some author whom I know to follow a religion with strong views on, say, eating crustaceans, then I can safely discount any arguments against crab-eating that I expect to find in that book. But highly abstract topics are probably relatively untainted, unless the author’s religion likewise incorporates a position on those topics into its group identity.
I didn't say how much weight I put on the author's religion, but it's relatively low, because I've seen some quite rational religious people. Also, I'm not sure about the significance of the correlation between religiosity and rationality in other domains.
The issue I have with the author's religion isn't that it might prevent him from accepting certain bits of knowledge. It's that he came to believe in religion in the first place—that has negative implications for his reasoning (I'm thinking mostly of Keith Stanovich's notion of dysrationalia), and it also suggests that he isn't a strict follower of the scientific approach. True, he was born in 1900, when that approach wasn't so widespread, but the fact remains.
Presumably because the OP believes that if the author is religious he is less likely to be correct about other things.