I think this idea is overrated by LWers. It’s true that if you have an argument concluding that P(A) = 1, it does not follow that P(A) = 1, because the argument might be wrong. But there is nothing special about 1 here: if you have an argument concluding that P(A) = 2⁄3, it likewise does not follow that P(A) = 2⁄3, for the same reason. The only reason to single out 0 and 1 is that they are a common special case: many arguments, in particular most mathematical proofs, do not involve probability at all, so their conclusion takes the form P(A) = 1 or P(A) = 0; moreover, mathematical proofs tend to be correct with very high probability, so P(A|proof of A) is very close to 1.
So does it follow that we should avoid probabilities of 0 and 1 in our reasoning? I don’t think it does, and I think that avoiding them becomes more and more pointless as your arguments become more mathematically rigorous. Probabilities of 0 and 1 are simply too useful to discard just because someone might get confused by them. Sure, if you’re manually setting priors for your Bayesian AI, you should be aware that giving a prior of 0 or 1 to a statement means it will never update away from it. But to how many of us is that relevant?
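(To spell out why such a prior never updates: by Bayes’ theorem, for any evidence E with P(E) > 0, P(A|E) = P(E|A) · P(A) / P(E). If the prior is P(A) = 0, the numerator is zero, so P(A|E) = 0 no matter what evidence is observed; symmetrically, if P(A) = 1, then P(E) = P(E|A) and the posterior is pinned at 1. No possible observation can move the agent off such a prior.)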
A similar idea is much better explained in Confidence Levels Inside and Outside an Argument. In my opinion, any part of this post that is not also covered there is not worth reading.