I’m not entirely sure I understand your point. The example you’re citing is more like the guy saying “I believe X, and X implies ~Y, therefore ~Y”, with Eliezer responding “So Y implies ~X, then?”
But one can hold the “X implies ~Y” belief whether one’s belief in X is low or high.
Or are you saying “the likelihoods assigned led to past interpretation of analogous (lack of) evidence, and that’s why the current prior is what it is”?
komponisto nailed the intuition I was working from: the likelihood ratio is independent of the prior, but an unswayable Bayesian holds P(E) fixed, which forces extreme priors to come with extreme likelihood ratios.
*blinks* I think I’m extra confused. The law of conservation of probability is basically just saying that the change in belief may be large or small, so evidence may be strong or weak in that sense. But that doesn’t leave the likelihoods up for grabs (well, okay, P(E|~H) could depend on how you distribute your belief over the space of hypotheses other than H, but… I’m not sure that was your point).
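Here’s a tiny sketch of that parenthetical, with made-up alternative hypotheses and numbers (nothing here is from the thread): P(E|~H) is just P(E|Hi) averaged over the alternatives Hi, weighted by how your belief in ~H is distributed among them.

```python
# Sketch of the parenthetical point: P(E|~H) depends on how belief is spread
# over the alternatives to H.  Hypothetical numbers, not from the thread.
priors = {"H1": 0.5, "H2": 0.1}          # P(Hi) for the alternatives (P(H) itself is 0.4)
likelihoods = {"H1": 0.9, "H2": 0.1}     # P(E|Hi)

p_not_H = sum(priors.values())           # P(~H) = 0.6

# P(E|~H) = sum_i P(E|Hi) * P(Hi) / P(~H)
p_E_given_not_H = sum(likelihoods[h] * priors[h] for h in priors) / p_not_H
print(p_E_given_not_H)   # ~0.767; shift the 0.6 of mass between H1 and H2 and this moves
```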
Okay, point conceded … that still doesn’t generate a result that matches the intuition I had. I need to spend more time on this to figure out what assumptions I’m relying on to claim that “extremely wrong beliefs force quick updates”.
Remember, though, that even fixing both P(E) and P(H), you can still make the ratio P(E|H)/P(E|~H) anything you want. Writing a = P(E), b = P(H), x = P(E|H), and cx = P(E|~H) (so the likelihood ratio is 1/c), the equation
a = bx + (1-b)(cx)
has the solution x = a / (b + (1-b)c), so it is guaranteed to be solvable for any a, b, c (as long as b + (1-b)c isn’t zero).
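For what it’s worth, a quick numeric check of this (the numbers are chosen arbitrarily): pick P(E), P(H), and whatever likelihood ratio you like, solve the equation above for x = P(E|H), and the decomposition still comes out right.

```python
# Check: with P(E) and P(H) fixed, any likelihood ratio r = P(E|H)/P(E|~H) is attainable.
# a = P(E), b = P(H), x = P(E|H), c*x = P(E|~H)  (so c = 1/r).  Arbitrary example numbers.
a, b = 0.3, 0.8                  # fixed P(E) and P(H)
r = 5.0                          # desired likelihood ratio, chosen freely
c = 1.0 / r
x = a / (b + (1 - b) * c)        # solve a = b*x + (1-b)*(c*x)
p_E_given_H, p_E_given_not_H = x, c * x

print(p_E_given_H * b + p_E_given_not_H * (1 - b))   # 0.3 = P(E), as required
print(p_E_given_H / p_E_given_not_H)                 # 5.0 = r
# (Caveat: for some choices of a, b, c the solved x lands above 1, i.e. isn't a valid
#  probability, but the equation itself always has a real solution.)
```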