Bayesians care a lot about falsifiability: a theory can only gain probability mass by assigning low probabilities to some outcomes (if you don’t believe me, go read Eliezer’s A Technical Explanation of Technical Explanation).
To be more precise (and more correct), we should say that a theory can gain probability mass, but only when more precise hypotheses are falsified.
If I think a coin is either fair or biased toward heads, and then it comes up tails three times, it’s probably fair.
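To make the coin example concrete, here’s a minimal sketch in Python. The 50/50 prior between the two hypotheses and the reading of “biased toward heads” as P(heads) = 0.75 are my illustrative assumptions, not part of the original claim:

```python
def posterior_fair(p_heads_biased: float = 0.75, prior_fair: float = 0.5) -> float:
    """Posterior probability that the coin is fair after observing three tails.

    Assumes (illustratively) that the only alternative hypothesis is a coin
    biased toward heads with P(heads) = p_heads_biased.
    """
    p_data_fair = 0.5 ** 3                       # P(TTT | fair) = 0.125
    p_data_biased = (1 - p_heads_biased) ** 3    # P(TTT | biased) = 0.015625
    numerator = p_data_fair * prior_fair
    evidence = numerator + p_data_biased * (1 - prior_fair)
    return numerator / evidence

print(posterior_fair())  # ~0.889
```

The fair hypothesis ends up around 89% likely, and notice why: it didn’t predict tails any better in absolute terms; it gained mass because the sharper biased-toward-heads hypothesis assigned those three tails a much lower probability and got falsified (in the probabilistic sense) by them.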