My impression is that implicitly relying on the arbitrary precision of a prior can give updates that are diametrically opposed to the ones you’d get with different, but arbitrarily similar, priors.
I’m not sure what the “precision of a prior” means. A prior is an expression of the knowledge you have before obtaining the data. It is not a measurement of some underlying quantity, so there is nothing for it to be a more or less precise measurement of.
Has anyone produced a scenario in which the brittleness phenomenon arises in realistic practice?
Precision is the reciprocal of the variance, so it is an inverse measure of spread. If you are relatively certain that the true value of a parameter lies in a narrow range, your prior has low variance / high precision. If you think the true value may lie in a broader range, your prior has high variance / low precision.
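To make the variance/precision relationship concrete, here is a minimal sketch of a conjugate normal–normal update with known data precision; the function name and the specific numbers are illustrative, not taken from the discussion above. The posterior precision is the prior precision plus n times the data precision, so a high-precision (low-variance) prior pulls the posterior mean strongly toward the prior mean, while a low-precision (high-variance) prior lets the data dominate.

```python
import numpy as np

def normal_posterior(prior_mean, prior_precision, data, data_precision):
    """Conjugate normal-normal update with known data precision.

    precision = 1 / variance, so a high-precision prior encodes a
    narrow range of plausible parameter values and resists being
    moved by the data.
    """
    n = len(data)
    post_precision = prior_precision + n * data_precision
    post_mean = (prior_precision * prior_mean
                 + data_precision * np.sum(data)) / post_precision
    return post_mean, post_precision

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=10)  # true value 2, unit variance

# Vague prior: high variance, low precision -- the data dominate.
print(normal_posterior(0.0, 1 / 100.0, data, 1.0))

# Confident prior: low variance, high precision -- the prior dominates.
print(normal_posterior(0.0, 100.0, data, 1.0))
```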