Why this isn’t necessarily true:
If we look at Bayes’ theorem (that picture above, with P(A|B) pronounced “probability of A if we learn B”), our probability of A after getting evidence B is equal to P(A), our probability before seeing the evidence (the “prior probability”), times a factor P(B|A)/P(B).
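Written out as a single equation (nothing new here, just the sentence above in symbols):

$$P(A \mid B) \;=\; P(A) \times \frac{P(B \mid A)}{P(B)}$$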
This factor is called the “likelihood ratio,” and it tells you how much impact the evidence should have on your probability: the more unexpected the evidence would be if A weren’t true, the more the evidence supports A. Like how UFO abduction stories aren’t very convincing, because we’d expect them to happen even if there weren’t any aliens (so P(B|A)/P(B) is close to 1, and multiplying by that factor barely changes our belief).
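To put completely made-up numbers on the UFO case: say your prior P(aliens) is 0.01, and abduction stories are nearly as common without aliens as with them, so the factor is about 1.05. Then

$$P(\text{aliens} \mid \text{stories}) \approx 0.01 \times 1.05 = 0.0105,$$

i.e. essentially where you started.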
Anyhow, because Bayes’ theorem splits up into parts like this, research papers don’t have to rely on priors! Each paper could just gather some evidence and report the likelihood ratio, P(evidence | hypothesis)/P(evidence). Then people with different priors would each multiply their own prior, P(A), by the likelihood ratio; that’s just Bayes’ theorem, so each of them gets their own P(A|B). And if you want to combine evidence from multiple papers, you can just multiply their likelihood ratios together.
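Here’s a minimal sketch of that bookkeeping in Python. Everything in it is made up for illustration, and it assumes each paper reports the prior-independent ratio P(evidence | A)/P(evidence | not-A) (the “Bayes factor” form), since P(evidence) by itself already depends on a prior:

```python
def posterior(prior, ratios):
    """Combine a reader's prior with likelihood ratios reported by papers.

    Uses the odds form of Bayes' theorem: posterior odds equal prior odds
    times the product of P(evidence | A) / P(evidence | not-A) from each
    independent piece of evidence.
    """
    odds = prior / (1.0 - prior)
    for r in ratios:
        odds *= r
    return odds / (1.0 + odds)


# Made-up ratios "reported" by three hypothetical papers.
paper_ratios = [3.0, 0.8, 5.0]

# Two readers with different priors use the same published ratios and
# each get their own posterior; no paper had to choose a prior for them.
print(posterior(0.10, paper_ratios))  # skeptical reader -> ~0.57
print(posterior(0.50, paper_ratios))  # agnostic reader  -> ~0.92
```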
Although, that’s only in a fairy-tale world with e.g. no file-drawer effect. In reality, more care would be necessary—the point is just that differing priors don’t halt science.
That’s not true in general.
Fair enough. Can I take your point to be “when things get super complicated, sometimes you can make conceptual progress only by not worrying about keeping track of everything?” The only trouble is that once you stop keeping track of probability/significance, it becomes difficult to pick it up again in the future—you’d need to gather additional evidence in a better-understood way to check what’s going on. Actually, that’s a good analogy for hypothesis generation, with the “difficult to keep track of” stuff becoming the problem of uncertain priors.
My point is more like: If scientific interest only rests on some limited aspect of the problem, you can’t avoid priors by, e.g., simply reporting likelihood ratios. Likelihood ratios summarize information about the entire problem, including the auxiliary, scientifically uninteresting aspects. The Bayesian way of making statements free of the auxiliary aspects (marginalization) requires, at the very least, a prior over those aspects.
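To make that concrete with a hypothetical symbol: write θ for the auxiliary, scientifically uninteresting part of the problem. The likelihood of the evidence under the interesting hypothesis A alone comes from marginalizing θ out, and that step already needs a prior p(θ | A):

$$P(B \mid A) \;=\; \int P(B \mid A, \theta)\, p(\theta \mid A)\, d\theta$$

So even a single paper’s likelihood for A, never mind the ratio, already carries a choice of prior over the parts nobody cares about.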
I’m not sure if I agree or disagree with the third sentence on down because I don’t understand what you’ve written.