Typically, arguments on that kind of topic contain a huge number of potentially sloppy inference steps, each with a rather low probability of being valid, so the probability that the whole argument is correct ends up vanishingly small (easily in the range of 10^-20). It’s incredibly easy to make evidence so weak it is not worth the paper it is written on. Furthermore, even dramatically raising the probability that each step is valid doesn’t make the result worthwhile: twenty steps at 0.9 apiece still multiply out to only about 0.12, yet people massively overestimate such conjunctions because they fail at exponents. Actually, I think the biggest failure of LWism is the ideology of expecting updates on arguments whose probability of correctness is well below 10^-10. People simply fail to imagine how low the probability of a conjunction can get, and/or don’t multiply at all, out of a residual belief in some common-mode correctness, as if it were an oracle speaking.
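A quick sketch of the arithmetic, in case the exponents aren’t intuitive. The step counts and per-step probabilities below are made-up illustrations, and the multiplication assumes the steps fail independently (the “common-mode correctness” intuition is precisely a refusal to grant that assumption):

```python
# Illustrative only: step counts and per-step validity probabilities are made up.

def conjunction_probability(p_step: float, n_steps: int) -> float:
    """Probability that every one of n independent inference steps is valid."""
    return p_step ** n_steps

# A 20-step argument with sloppy steps (10% chance each is valid): 1e-20.
print(conjunction_probability(0.1, 20))  # 1e-20

# Even generously granting 90% validity per step, 20 steps multiply out
# to only ~0.12, nowhere near the near-certainty people intuitively assign.
print(conjunction_probability(0.9, 20))  # ~0.1216
```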