How has nobody yet mentioned Confidence Levels Inside and Outside an Argument? Anyway, I take gjm’s line on this: I should assign at least 5% probability that my reasoning for rejecting Christianity is invalid in some way that I’m unaware of, but that’s different from a 5% probability that Christianity is true.
For one thing, my reasoning that Christianity is very likely false is essentially the same as my reasoning that other theisms are false, so at the very least a whole bunch of other religions get lumped into that same 5%. (That is, if Zeus exists, then my reasoning was just as wrong as if Yahweh exists.) Furthermore, there are other possibilities—such as that I’m mentally unbalanced and hallucinating my high intelligence, or that I’m a brain in a vat whose reasoning is being systematically tweaked for a mad scientific experiment—which don’t seem all that correlated with whether Christianity is true or not.
Since in my case as well as yours, Christianity plays a unique role vis-a-vis other theisms, there is some justification for promoting it a bit within the space of “things that might actually be the case if my reasoning is bad” (in the same sense that a lottery winner might well consider it more likely than average that they’re in a simulation). But my estimate still comes out more like 0.1% than like 5%.
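A minimal sketch of that outside-view arithmetic, in Python (the size of the rival-hypothesis space and the boost factor are illustrative assumptions, not figures from the comments above):

```python
# Rough sketch of the outside-view arithmetic: a small chance that my
# reasoning is invalid gets spread over many rival hypotheses, with a mild
# promotion for Christianity's unique role. All numbers are assumptions
# chosen for illustration.

p_reasoning_invalid = 0.05   # chance the anti-theism reasoning fails somehow
n_rival_hypotheses = 100     # assumed size of the "if my reasoning is bad" space
christianity_boost = 2       # assumed mild promotion for its unique role

p_christianity_true = p_reasoning_invalid * christianity_boost / n_rival_hypotheses
print(f"{p_christianity_true:.3%}")   # ~0.1%, far below the naive 5%
```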
I didn’t write it explicitly, but my 99% answer was based on reasoning like this: there is a 1% chance that my way of reasoning is invalid because of something I am not aware of. That means that, instead of a 1% chance of meeting Zeus, I estimate a 1% chance of encountering some argument that could change my reasoning about the subject. In other words, it’s not a 1% chance that I am wrong about this, but rather a 1% chance that I am irrational about this.
Typically, arguments on that kind of topic contain a huge number of potentially sloppy inference steps, each with a rather low probability of being valid, which leads to a very low probability that the argument as a whole is correct (easily in the range of 10^-20). It’s incredibly easy to produce evidence so weak it is not worth the paper it is written on. Furthermore, even dramatically raising the probability that each step is valid doesn’t make the result worthwhile; it just leads to massive overestimation of the probability that the argument is correct, because people fail at exponents. Actually, I think the biggest failure of LWism is the ideology of expecting updates on arguments whose probabilities are well below 10^-10: people fail to imagine just how low the probability of a conjunction can get, and/or they don’t multiply, because of a residual belief in some common-mode correctness, as if it were an oracle speaking.
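A minimal sketch of the exponent point, in Python (the step counts and per-step probabilities are assumptions chosen for illustration):

```python
# Conjunction probabilities collapse exponentially with the number of steps;
# the step counts and per-step probabilities below are illustrative assumptions.

def chain_probability(p_step: float, n_steps: int) -> float:
    """Probability that every step of an n-step conjunctive argument is valid."""
    return p_step ** n_steps

# Twenty sloppy steps, each only 10% likely to be valid: 1e-20 overall.
print(chain_probability(0.10, 20))

# Even rating each step generously at 90% leaves only about 12% --
# dramatically raising per-step validity still doesn't rescue the argument.
print(chain_probability(0.90, 20))
```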