Some priors are very bad. If a Bayesian somehow ends up with such a prior, they’re SOL because they have no notion of rejecting priors.
There are two priors for A that a Bayesian is unable to update away from: p(A) = 0 and p(A) = 1. If a Bayesian ever assigns p(A) = 0 or p(A) = 1 and is mistaken, they fail at life. No second chances. Shalizi’s hypothetical agent started with the absolute (and insane) belief that the distribution was not a mixture of the two Gaussians in question, and that never changed through the application of Bayes’ rule.
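A minimal sketch of why p = 0 is absorbing (not from the original comment; the likelihood numbers are made up for illustration): a prior of exactly 0 never moves under Bayes’ rule, while even a tiny nonzero prior recovers under repeated strong evidence.

```python
def bayes_update(p_A, likelihood_if_A, likelihood_if_not_A):
    """Posterior P(A | evidence) from prior P(A) and the two likelihoods."""
    numerator = likelihood_if_A * p_A
    denominator = numerator + likelihood_if_not_A * (1.0 - p_A)
    if denominator == 0.0:
        return p_A  # evidence impossible under both hypotheses; leave the prior alone
    return numerator / denominator

for prior in (0.0, 1e-9):
    p = prior
    # Each piece of evidence is 10x more likely if A is true than if it is false.
    for _ in range(30):
        p = bayes_update(p, likelihood_if_A=0.5, likelihood_if_not_A=0.05)
    print(f"prior={prior}: posterior after 30 strong updates = {p}")
# prior=0.0 stays at exactly 0.0; prior=1e-9 climbs to essentially 1.
```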
Bayesians cannot reject a prior of 0. They can ‘reject’ a prior of “That’s definitely not going to happen, but if I am faced with overwhelming evidence then I may change my mind a bit.” They just wouldn’t write that state as p = 0, or imply it by excluding it from a simplified model, without being willing to review the model for sanity afterward.