komponisto nailed the intuition I was going from: the likelihood ratio is independent of the prior, but an unswayable Bayesian fixes P(E), forcing extreme priors to have extreme likelihood ratios.
*blinks* I think I’m extra confused. The law of conservation of probability is basically just saying that the change in belief may be large or small, so evidence may be strong or weak in that sense. But that doesn’t leave the likelihoods up for grabs (well, okay, P(E|~H) could depend on how you distribute your belief over the space of hypotheses other than H, but… I’m not sure that was your point).
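A minimal sketch of that parenthetical, with invented numbers and hypothesis names: P(E|~H) is just the prior-weighted mixture of the alternatives’ likelihoods, so redistributing belief among the alternatives moves it around.

```python
# P(E|~H) as a mixture over alternative hypotheses H1, H2, ...,
# weighted by how the prior mass outside H is split among them.
# (All values here are made up for illustration.)

alternatives = {            # hypothetical alternatives to H
    "H1": (0.06, 0.50),     # (prior P(Hi), likelihood P(E|Hi))
    "H2": (0.04, 0.01),
}
p_not_h = sum(prior for prior, _ in alternatives.values())

# P(E|~H) = sum_i P(E|Hi) * P(Hi) / P(~H)
p_e_given_not_h = sum(prior * lik for prior, lik in alternatives.values()) / p_not_h
print(p_e_given_not_h)      # 0.304; shifts if the same 0.10 mass is redistributed
```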
Okay, point conceded … that still doesn’t generate a result that matches the intuition I had. I need to spend more time on this to figure out what assumptions I’m relying on to claim that “extremely wrong beliefs force quick updates”.
Remember, though, that even fixing both P(E) and P(H), you can still make the ratio P(E|H)/P(E|~H) anything you want. Writing a = P(E), b = P(H), x = P(E|H), and cx = P(E|~H) (so 1/c is the likelihood ratio), the law of total probability becomes

a = bx + (1-b)(cx)

which is guaranteed to have a solution x = a/(b + (1-b)c) for any a, b, c.
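Here’s a minimal sketch of that algebra (the function name and the specific numbers are mine): fix P(E) and P(H), pick an arbitrary target likelihood ratio, solve for the two likelihoods, and check that the law of total probability still recovers P(E).

```python
# Fix P(E) and P(H), pick any target likelihood ratio P(E|H)/P(E|~H),
# and solve the law of total probability
#   P(E) = P(H)*P(E|H) + (1 - P(H))*P(E|~H)
# for the two likelihoods.

def likelihoods_for_ratio(p_e, p_h, ratio):
    """Return (P(E|H), P(E|~H)) given fixed P(E), P(H) and a target ratio."""
    c = 1.0 / ratio                            # c = P(E|~H) / P(E|H)
    p_e_given_h = p_e / (p_h + (1 - p_h) * c)  # x = a / (b + (1-b)c)
    return p_e_given_h, c * p_e_given_h

for ratio in (0.5, 1.0, 10.0, 1000.0):
    x, y = likelihoods_for_ratio(p_e=0.3, p_h=0.9, ratio=ratio)
    check = 0.9 * x + 0.1 * y                  # should recover P(E) = 0.3
    print(f"ratio={ratio:7.1f}  P(E|H)={x:.4f}  P(E|~H)={y:.4f}  P(E)={check:.4f}")
```

One caveat worth flagging: the algebraic solution always exists, but for sufficiently extreme combinations of a, b, and the target ratio it can land outside [0,1]; the numbers above are chosen to stay in range.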