(which would require us to know P(H), P(E|H), and P(E|~H))
Is that not precisely the problem? Often, the H you are interested in is so vague (“there is some kind of effect in a certain direction”) that it is very difficult to estimate P(E|H), or even to define it.
OTOH, P(E|~H) is often very easy to compute from first principles, or to obtain through experiments (since conditions where “the effect” is not present are usually the most common).
Example: I have a coin. I want to know if it is “true” or “biased”. I flip it 100 times and get 78 tails. Now, how do I estimate the probability of obtaining this many tails, given that the coin is “biased”? How do I even express that analytically? By contrast, it is very easy to compute the probability of this sequence (or any other) with an “unbiased” coin.
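To make the asymmetry concrete, here is a minimal sketch in Python. The unbiased-coin probability falls straight out of the binomial formula; the “biased” probability has no value at all until you commit to some model of the bias (the uniform prior below is an arbitrary choice of mine, purely for illustration):

```python
from math import comb

n, k = 100, 78  # 100 flips, 78 tails

# P(E | "unbiased"): a one-line binomial computation.
p_fair = comb(n, k) * 0.5 ** n

# P(E | "biased"): undefined until we pick a distribution over the unknown
# bias p. Here we (arbitrarily) assume p is uniform on [0, 1] and average
# the binomial likelihood over a fine grid of bias values.
grid = [i / 1000 for i in range(1001)]
p_biased = sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for p in grid) / len(grid)

print(f"P(E | unbiased) = {p_fair:.3e}")
print(f"P(E | biased)   = {p_biased:.3e}  (entirely dependent on the assumed bias model)")
```

Change the assumed bias model and the second number changes with it; the first one does not.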
So there you have it. The whole concept of “null hypotheses” is not a logical axiom; it simply derives from real-world observation: in the real world, for most of the H we are interested in, estimating P(E|~H) is easy, and estimating P(E|H) is either hard or impossible.
What about P(E|H)?? (Not to mention P(H).)
P(H) is silently set to 0.5. If you know P(E|~H), this makes P(E|H) unnecessary for computing the real quantity of interest, P(H|E) / P(~H|E). I think.
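For reference, the odds form of Bayes’ theorem that this comment seems to be appealing to (my reading of it, not the commenter’s own derivation):

$$\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}$$

With P(H) = P(~H) = 0.5 the prior-odds factor equals 1, so the posterior odds reduce to the likelihood ratio P(E|H)/P(E|~H).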
There needs to be a post specifically devoted to arguments of the form “It’s okay to do things wrong, because doing them right would be hard”. I’ve seen this so many times, in so many places, in so many subjects, that I have to conclude that people just don’t see what is wrong with it.
(No, I’m not talking about making simplifying assumptions or idealizations in models. More like presenting a collection of sometimes-useful ad-hoc tricks as a competing theory, which is then argued for as a theory against its competitors on the basis of its being “easier to apply”.)
Bayes’ Theorem says that P(H|E) = P(H)P(E|H)/P(E). That’s, like, the law. You don’t get to take P(E|H) out of the equation, or pretend it isn’t there, just because it’s difficult to estimate. As I’ve said elsewhere, if you have a belief, then you’ve done a Bayesian update—which means you have some assumption about each of those quantities appearing in the formula, whether you choose to confront these assumptions or not.
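To illustrate with the coin example above (a sketch under made-up assumptions, not a definitive calculation): even if you grant P(H) = 0.5, the posterior P(H|E) still depends entirely on what you take P(E|H) to be. Two different readings of “biased” give two very different answers:

```python
from math import comb

n, k = 100, 78                       # 100 flips, 78 tails
p_null = comb(n, k) * 0.5 ** n       # P(E | unbiased coin)

def p_given_biased(bias_model):
    """P(E | biased), averaging the binomial likelihood over an assumed
    list of (tails-probability, weight) pairs describing the bias."""
    return sum(w * comb(n, k) * p ** k * (1 - p) ** (n - k) for p, w in bias_model)

# Two made-up interpretations of the vague hypothesis "the coin is biased":
models = {
    "any bias, uniform on [0, 1]": [(i / 100, 1 / 101) for i in range(101)],
    "biased toward heads (p_tails = 0.3)": [(0.30, 1.0)],
}

for name, model in models.items():
    lr = p_given_biased(model) / p_null       # likelihood ratio P(E|H)/P(E|~H)
    posterior = lr / (1 + lr)                 # posterior P(H|E), with P(H) = 0.5
    print(f"{name}: likelihood ratio = {lr:.3g}, P(H|E) = {posterior:.4f}")
```

Same data, same P(H), same P(E|~H); the only thing that changed is the assumed P(E|H), and the conclusion swings from near-certainty to near-refutation.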
As a matter of fact, if you find P(E|H) overly difficult to estimate, that means your H isn’t paying its rent.