If there is some evidence E that the assertion A can’t explain, then the likelihood P(E|A) will be tiny. Thus, the numerator P(E|A)P(A) will also be tiny, and likewise the posterior probability P(A|E). Updating on the near impossibility of evidence E has driven the probability of the assertion A down to nearly zero.
This isn’t quite right. The tiny probability of an observation given the hypothesis does not imply that the posterior of the hypothesis will be low. Suppose there’s a lottery with 10 million tickets. We have very good reasons to believe the lottery is fair. Still, whoever the winner X is, P(X is the winner|The lottery is fair) = 1/10,000,000. The reason P(The lottery is fair|X is the winner) is not low is that the alternative hypothesis “The lottery is not fair” also does a poor job of predicting the result (why rigged in favor of X specifically and not one of the other 9,999,999 people?), and the prior P(The lottery is not fair) is very low.

OK, but what about the hypothesis “The lottery is 100% rigged in favor of X”? The probability that X is the winner given this alternative is 1. But the prior on that hypothesis is basically zero, so it doesn’t matter. (Things are different if we have reasons to think X is suspicious. Then the fact that X won is a good reason to suspect the lottery isn’t fair.)
tl;dr: The posterior P(H1|E) is tiny iff P(H1)P(E|H1) is tiny relative to all other P(Hi)P(E|Hi).
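Here’s a quick numerical sketch of that, with made-up priors (and treating “rigged” as “rigged for exactly one person, chosen with no special regard for X”), just to show that a 1-in-10-million likelihood doesn’t sink the fair-lottery hypothesis:

```python
# Toy Bayes update for the lottery example. The priors below are invented
# purely for illustration; only the structure of the calculation matters.
N = 10_000_000  # number of tickets

# (Assumed) priors over three mutually exclusive hypotheses
p_fair = 0.99                     # the lottery is fair
p_rigged = 0.01                   # the lottery is rigged for *some* one person
p_rigged_for_x = p_rigged / N     # rigged specifically for X: basically zero
p_rigged_for_other = p_rigged - p_rigged_for_x

# Likelihood of the evidence "X won" under each hypothesis
lik_fair = 1 / N          # fair: X is one of N equally likely winners
lik_rigged_for_x = 1.0    # rigged for X: X wins for sure
lik_rigged_for_other = 0  # rigged for someone else: X cannot win

# Generalised Bayes: posterior = prior * likelihood / sum over all hypotheses
total = (p_fair * lik_fair
         + p_rigged_for_x * lik_rigged_for_x
         + p_rigged_for_other * lik_rigged_for_other)

print(p_fair * lik_fair / total)  # ~0.99: P(fair | X won) stays high
```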
I agree with you here. I made a mistake, but on the bright side I learnt a lot about the generalised form of Bayes’ theorem, which applies to all possible hypotheses. This was also how Eliezer explained the relationship between the posterior and the numerator in Decoherence is Falsifiable and Testable. I was trying to simplify the relationship between Bayes’ theorem and Deutsch’s criterion for good explanations for the sake of the post, but I oversimplified too much.
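For anyone reading along, the generalised form I mean is the one that normalises over the whole hypothesis space, so a hypothesis only ends up with a tiny posterior if some rival’s prior-times-likelihood term dominates its own:

$$P(H_1 \mid E) = \frac{P(E \mid H_1)\,P(H_1)}{\sum_i P(E \mid H_i)\,P(H_i)}$$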
I still think that Bayes’ theorem and Deutsch’s criterion for good explanations are compatible and that, in a practical sense, one can be explained in terms of the other, but using the generalised form of Bayes is necessary.
I updated my post to explain that this part is slightly incorrect.
It seems that he makes the same mistake in that post (though he makes it clear in the rest of the essay that alternatives matter). You paraphrased him right.
Incidentally, Popper also thought that you can’t falsify a theory unless you have a non-ad hoc alternative that explains the data better.
This is so interesting. Do you know where I can read more about this? Conjectures and Refutations?