Zero and one are probabilities. The apparent opposite claim is hyperbole intended to communicate something else, but people on LessWrong persistently make the mistake of taking it literally. For examples of 0 and 1 appearing unavoidably in the theory of probability, consider P(A|A) = 1 and P(A|¬A) = 0. If someone disputes either of these formulae, the onus is on them to rebuild probability theory in a way that avoids them. As far as I know, no one has even attempted this.
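Both identities fall straight out of the ratio definition of conditional probability: P(A|A) = P(A&A)/P(A) = P(A)/P(A) = 1 whenever P(A) > 0, and P(A|¬A) = P(A&¬A)/P(¬A) = 0/P(¬A) = 0 whenever P(¬A) > 0.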
But P(A|B) = P(A&B)/P(B) for any positive value of P(B). You can condition on evidence all day without ever needing to assert certainty about anything. Your conclusions will all be hypothetical, of the form “if this is the prior over A and this B is the evidence, this is the posterior over A”. If the evidence is itself uncertain, that uncertainty can be incorporated into the calculation, giving conclusions of the form “given this prior over A and this probability distribution over possible evidence B, this is the posterior over A.”
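Here is a minimal sketch of both kinds of conclusion in Python (the numbers, names, and the binary setup are my own illustration, not anything taken from the argument above): ordinary conditioning when B is observed for certain, and mixing the conditional posteriors by a distribution over B when it is not.

```python
# Illustrative numbers only: A is a binary hypothesis, B a binary piece of evidence.
prior_A = 0.3                        # P(A)
lik = {True: 0.9, False: 0.2}        # lik[a] = P(B | A=a)

def posterior_given_B(b_observed: bool) -> float:
    """P(A | B=b) via Bayes' rule; assumes P(B=b) > 0."""
    p_b_given_A = lik[True] if b_observed else 1 - lik[True]
    p_b_given_notA = lik[False] if b_observed else 1 - lik[False]
    p_b = p_b_given_A * prior_A + p_b_given_notA * (1 - prior_A)
    return p_b_given_A * prior_A / p_b

# Certain evidence: B definitely happened.
print(posterior_given_B(True))       # ~0.66

# Uncertain evidence: probability q that B happened; mix the two
# conditional posteriors by the distribution over B (Jeffrey conditioning).
q = 0.8
print(q * posterior_given_B(True) + (1 - q) * posterior_given_B(False))
```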
If you are uncertain even of the probability distribution over B, then a hard-core Bayesian will say that this uncertainty is modelled by a distribution over distributions over B, which can be folded down into a single distribution over B (sketched below). Soft-core Bayesians will scoff at this and turn to magic, a.k.a. model checking, human understanding, etc. Hard-core Bayesians will say that these only work to the extent that they approximate Bayesian inference. Soft-core Bayesians aren’t listening at this point, but if they were, they might challenge the hard-core Bayesians to produce an actual method that works better.
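A sketch of what “folded down” means concretely (again with made-up numbers): if our uncertainty about the right distribution over B is itself expressed as weights over a few candidate distributions, marginalizing out those weights gives back a single distribution over B.

```python
import numpy as np

# Made-up example: three candidate distributions over a three-valued B,
# and weights expressing how much credence we give each candidate.
candidate_dists = np.array([
    [0.7, 0.2, 0.1],    # distribution over B under candidate 1
    [0.3, 0.4, 0.3],    # under candidate 2
    [0.1, 0.1, 0.8],    # under candidate 3
])
weights = np.array([0.5, 0.3, 0.2])   # the "distribution over distributions"

# Folding down = marginalizing out the candidate index.
p_B = weights @ candidate_dists
print(p_B)   # [0.46 0.24 0.3 ], a single distribution over B (sums to 1)
```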