Likelihood ratios for updating are independent of the prior
This is kind of technically true, but not in a practical sense. As you learn more about most systems, the likelihood ratio should likely go down for each additional piece of evidence. The likelihood ratio a new observation D gives for a hypothesis X is after all P(D|X,E):P(D|¬X,E), where the E refers to all the previous observations you’ve made that are now integrated into your prior.
Usually when referring to “updating on En” we use the likelihood ratio
P(En|X,E1,E2,...,En−1):P(En|¬X,E1,E2,...,En−1)
which kind of makes it clear that this will depend on the order of the different Ei.
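A toy numerical sketch of that order-dependence (the numbers here are my own illustration, not from the comment): with two correlated binary observations bearing on a hypothesis X, the likelihood ratio each observation contributes depends on which one you condition on first, even though the combined update is the same either way.

```python
# Two correlated observations e1, e2 about a hypothesis X.
# Joint likelihoods P(e1=a, e2=b | X) and P(e1=a, e2=b | not-X);
# the values are made up purely for illustration.
p_joint_X = {(1, 1): 0.6, (1, 0): 0.2, (0, 1): 0.1, (0, 0): 0.1}
p_joint_nX = {(1, 1): 0.1, (1, 0): 0.3, (0, 1): 0.2, (0, 0): 0.4}

def marginal(p, axis, val):
    """Marginal probability that observation `axis` (0 or 1) takes value `val`."""
    return sum(v for (a, b), v in p.items() if (a, b)[axis] == val)

# Suppose we observe e1 = 1 and e2 = 1.
# Order A: update on e1 first, then on e2 given e1.
lr_e1 = marginal(p_joint_X, 0, 1) / marginal(p_joint_nX, 0, 1)
lr_e2_given_e1 = (p_joint_X[(1, 1)] / marginal(p_joint_X, 0, 1)) \
               / (p_joint_nX[(1, 1)] / marginal(p_joint_nX, 0, 1))

# Order B: update on e2 first, then on e1 given e2.
lr_e2 = marginal(p_joint_X, 1, 1) / marginal(p_joint_nX, 1, 1)
lr_e1_given_e2 = (p_joint_X[(1, 1)] / marginal(p_joint_X, 1, 1)) \
               / (p_joint_nX[(1, 1)] / marginal(p_joint_nX, 1, 1))

# The per-step likelihood ratios differ between the two orders,
# but their products (the total evidence) agree.
print("order A steps:", lr_e1, lr_e2_given_e1)
print("order B steps:", lr_e2, lr_e1_given_e2)
print("totals:", lr_e1 * lr_e2_given_e1, lr_e2 * lr_e1_given_e2)
```

With these numbers the first observation in order A carries a likelihood ratio of 2:1, while the same observation taken second (order B) carries about 2.57:1; the product in both orders equals P(e1,e2|X)/P(e1,e2|¬X) = 6.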
> As you learn more about most systems, the likelihood ratio should likely go down for each additional point of evidence.
I’d be interested to see the assumptions which go into this. As Stuart has pointed out, it’s got to do with how correlated the evidence is. And for fat-tailed distributions we probably should expect to be surprised at a constant rate.