No it is not. If you put different numbers into the prior, then the probability produced by a Bayesian update will always be different. If the evidence is strong enough, it might not matter too much. But if one of the priors differs by many orders of magnitude, then it matters quite a lot.
If people have different priors both for the hypothesis and for the evidence, it is obvious, as Lumifer said, that those can combine to give the same posterior for the hypothesis, given the evidence, since I can make the posterior any value I like by setting the prior for the evidence appropriately.
You don’t get to set the prior for the evidence. Your prior distribution over the evidence is determined by your prior over the hypotheses by P(E) = the sum of P(E|H)P(H) over all hypotheses H. For each H, the distribution P(E|H) over all E is what the hypothesis H is: a hypothesis just is a way of assigning probabilities to observations.
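The point can be made concrete with a toy two-hypothesis example (the coin setup and the specific numbers here are my own illustration, not from the discussion): once the prior over hypotheses and the likelihoods are fixed, P(E) is fully determined, leaving no freedom to "set" it.

```python
# Hypothetical example: two hypotheses about a coin.
prior = {"fair": 0.7, "biased": 0.3}       # P(H)
likelihood = {"fair": 0.5, "biased": 0.9}  # P(heads | H)

# P(heads) = sum over H of P(heads|H) * P(H).
# There is no separate dial for the prior on the evidence.
p_heads = sum(likelihood[h] * prior[h] for h in prior)
print(p_heads)
```

Changing `prior` changes `p_heads` automatically; that is the sense in which the prior over the evidence is not an independent quantity.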
But the point holds, that the same evidence can update different priors to identical posteriors. For example, one of the four aces from a deck is selected, not necessarily from a uniform distribution. Person A’s prior over which ace it is: spades 0.4, clubs 0.4, hearts 0.1, diamonds 0.1. B’s prior: spades 0.2, clubs 0.2, hearts 0.3, diamonds 0.3. Evidence is given to them: the card is red. They reach the same posterior: spades 0, clubs 0, hearts 0.5, diamonds 0.5.
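The ace example can be checked mechanically; this sketch just runs the stated priors through the usual normalize-after-multiplying-by-the-likelihood update.

```python
# The ace example: two different priors, the same evidence ("the card is red"),
# identical posteriors.
prior_a = {"spades": 0.4, "clubs": 0.4, "hearts": 0.1, "diamonds": 0.1}
prior_b = {"spades": 0.2, "clubs": 0.2, "hearts": 0.3, "diamonds": 0.3}

def update_on_red(prior):
    # Likelihood P(red | suit): 1 for the red suits, 0 for the black ones.
    like = {"spades": 0.0, "clubs": 0.0, "hearts": 1.0, "diamonds": 1.0}
    unnorm = {s: like[s] * prior[s] for s in prior}
    z = sum(unnorm.values())
    return {s: unnorm[s] / z for s in prior}

print(update_on_red(prior_a))  # hearts and diamonds at 0.5 each
print(update_on_red(prior_b))  # the same posterior
```

The coincidence works because, within the red suits, both priors happen to assign equal conditional probability to hearts and diamonds; the evidence wipes out exactly the part of the priors where they disagreed.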
Some might quibble over the use of probabilities of zero, and there may well be a theorem to say that if all the distributions involved are everywhere nonzero the mapping of priors to posteriors is one-to-one, but the underlying point would survive in a different form: for any separation between posteriors, however small, and any separation between the priors, however large, some observation is strong enough evidence to transform those priors into posteriors that are that close. (I have not actually proved this as a theorem, but something along those lines should be true.)
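That closing conjecture is easy to illustrate numerically in the two-hypothesis case, where the update is just multiplying the prior odds by the Bayes factor (the specific priors and Bayes factors below are my own illustrative choices, with no zeros involved):

```python
# Two agents with very different priors on a hypothesis H (0.99 vs 0.01).
# As the Bayes factor P(E|H)/P(E|not-H) grows, their posteriors converge.
def posterior(prior_h, bayes_factor):
    odds = (prior_h / (1 - prior_h)) * bayes_factor  # posterior odds
    return odds / (1 + odds)                         # back to a probability

for bf in (10, 1e3, 1e6):
    gap = posterior(0.99, bf) - posterior(0.01, bf)
    print(bf, gap)
```

The gap shrinks steadily as the evidence strengthens, which is the informal version of the claimed theorem: no posteriors become exactly equal, but any desired closeness is reachable with strong enough evidence.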