It sounds to me like a problem of not reasoning according to Occam’s razor and “overfitting” a model to the available data.
Ceteris paribus, H’ isn’t any more “fishy” than other hypotheses, but H’ is a significantly more complex hypothesis than H or ¬H: instead of asserting H or ¬H outright, it asserts (A=>H) & (B=>¬H), where A and B are the circumstances of Alice’s and Bob’s experiments, so it should have been commensurately de-weighted in the prior distribution according to its complexity. The fact that Alice’s study supports H and Bob’s contradicts it does, in fact, increase the weight given to H’ in the posterior relative to its weight in the prior; it’s just that H’ is prima facie less likely, according to Occam.
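To make the de-weighting concrete, here’s a toy sketch of a Solomonoff-style complexity prior, P(h) ∝ 2^(−description length). The description lengths are made-up numbers for illustration only; the point is just that the conjunctive hypothesis H’ takes strictly more bits to specify and so starts with less prior mass:

```python
# Toy complexity prior: weight each hypothesis by 2^(-description_length),
# then normalize. Description lengths below are assumed for illustration.
hypotheses = {
    "H":     1,  # simple assertion
    "not-H": 1,  # equally simple
    "H'":    4,  # (A=>H) & (B=>not-H): more structure to specify
}
raw = {h: 2.0 ** -length for h, length in hypotheses.items()}
total = sum(raw.values())
prior = {h: w / total for h, w in raw.items()}

print(prior)  # H and not-H each get ~0.47; H' gets ~0.06
```

The exact encoding doesn’t matter for the argument; any prior that penalizes description length will push H’ below H and ¬H before the evidence comes in.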
Given all the evidence E, the ratio of posteriors is P(H’|E)/P(H|E) = P(E|H’)P(H’)/(P(E|H)P(H)), since the marginal P(E) cancels. We know P(E|H’) > P(E|H) (and P(E|H’) > P(E|¬H)), since the results of Alice’s and Bob’s studies together are more likely given H’, but P(H’) < P(H) (and P(H’) < P(¬H)) according to the complexity prior. Whether H’ is more likely than H (or ¬H, respectively) ultimately depends on whether the likelihood ratio P(E|H’)/P(E|H) (or P(E|H’)/P(E|¬H)) is larger or smaller than the inverse prior ratio P(H)/P(H’) (or P(¬H)/P(H’)).
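As a numerical sketch of that comparison, with all probabilities made up for illustration: the evidence can favor H’ by a healthy likelihood ratio and H can still win, because the complexity prior handicaps H’ from the start.

```python
# Toy posterior-odds calculation; every number here is assumed.

# Complexity prior penalizes the more complex hypothesis H'.
p_H_prime = 0.05   # prior P(H')
p_H       = 0.45   # prior P(H)

# Likelihoods of the combined evidence E (Alice's study supports H,
# Bob's contradicts it). E fits H' better than it fits H.
p_E_given_H_prime = 0.60
p_E_given_H       = 0.10

# Posterior odds: P(H'|E)/P(H|E) = [P(E|H')/P(E|H)] * [P(H')/P(H)]
likelihood_ratio = p_E_given_H_prime / p_E_given_H   # ~6.0, favors H'
prior_odds       = p_H_prime / p_H                   # ~0.11, favors H
posterior_odds   = likelihood_ratio * prior_odds     # ~0.67

print(posterior_odds)  # < 1: H is still favored, despite E fitting H' better
```

Here the likelihood ratio (6) is smaller than the inverse prior ratio (9), so the posterior still favors H; had the evidence been more lopsided, H’ would have overcome its prior handicap.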
I think it ends up feeling fishy because the people formulating H’ used extra features (the circumstances of the experiments) in a more complex model to fit the already-observed data, after having observed that data. So in selecting H’ as a hypothesis, they seem to be according it more weight than it deserves under the complexity prior.