The key point is that when you run a p-value test you are computing p(data | null_hyp). This is certainly useful to calculate, but it doesn’t tell you the whole story about whether your data support any particular non-null hypothesis.
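A minimal sketch of this distinction (the coin-flip numbers are hypothetical, chosen for illustration): the p-value is a tail probability computed entirely under the null, while comparing the data's likelihood under a specific alternative is a separate calculation the p-value alone doesn't give you.

```python
# Contrast p(data | null) with a likelihood comparison against one
# particular alternative. Data here are hypothetical: 60 heads in
# 100 flips of a possibly biased coin.
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n flips with bias p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 100, 60

# One-sided p-value under the null p = 0.5: P(X >= 60 | H0).
p_value = sum(binom_pmf(i, n, 0.5) for i in range(k, n + 1))

# Likelihood ratio of one alternative (p = 0.6) against the null.
# The p-value says nothing about this comparison by itself.
lr = binom_pmf(k, n, 0.6) / binom_pmf(k, n, 0.5)

print(f"p-value under H0: {p_value:.4f}")
print(f"likelihood ratio p=0.6 vs p=0.5: {lr:.2f}")
```

Here the same data that give a "significant" p-value near 0.03 also favor the specific alternative p = 0.6 over the null by a likelihood factor of about 7 — two different questions, answered by two different calculations.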
Chapter 17 of E.T. Jaynes’ book provides a lively discussion of the limitations of traditional hypothesis testing, and is accessible enough that you can dive into it without having worked through the rest of the book.
The Cohen article cited below is nice but it’s important to note it doesn’t completely reject the use of null hypotheses or p-values:
… null hypothesis testing complete with power analysis can be useful if we abandon the rejection of point nil hypotheses and use instead “good-enough” range null hypotheses
I think it’s funny that the observation that it’s “non-Bayesian” is being treated here as a refutation, and got voted up. Not terribly surprising though.
Could you be more explicit here? I too would have considered that if the charge of non-Bayesianness were to stick, it would be tantamount to a refutation — so if I’m making a mistake, help me out?
The charge was not that the idea is not useful, nor that it is not true, either of which might be a mark against it. But “non-Bayesian”? I can’t unpack that accusation in a way that makes it seem like a good thing to be concerned about. Even putting aside that I don’t much care for Bayesian decision-making (for humans), it sounds like it’s in the same family as a charge of “non-Christian”.
One analogy: non-mathematical, not formalized, not written in English, and attempts to translate generally fail.
See [*] for a critique of null hypothesis testing and related techniques from a Bayesian perspective. To quote:
My work in power analysis led me to realize that the nil hypothesis is always false. [...] If it is false, even to a tiny degree, it must be the case that a large enough sample will produce a significant result and lead to its rejection. So if the null hypothesis is always false, what’s the big deal about rejecting it?
[*] J. Cohen (1994). “The Earth Is Round (p < .05)”. American Psychologist 49(12):997–1003.
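Cohen’s point is easy to demonstrate numerically (a sketch with assumed numbers — a true proportion of 0.51 standing in for a “nil hypothesis false to a tiny degree”): the expected p-value of a standard proportion test against H0: p = 0.5 shrinks toward zero as the sample grows, so rejection becomes guaranteed.

```python
# Sketch of Cohen's observation: any non-nil true effect, however
# tiny, is rejected by a large enough sample. The true proportion
# 0.51 is an assumed example, not from the article.
from math import sqrt, erfc

def expected_p(n, p_true=0.51):
    """Two-sided p-value of a normal-approximation proportion test
    against H0: p = 0.5, evaluated at the expected sample proportion."""
    z = (p_true - 0.5) / sqrt(0.25 / n)   # expected z-statistic
    return erfc(z / sqrt(2))              # two-sided normal tail

for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9}: expected p-value ~ {expected_p(n):.3g}")
```

With n = 100 the tiny effect is invisible (p ≈ 0.84); with n = 10,000 it crosses the 0.05 threshold; with n = 1,000,000 rejection is a foregone conclusion — which is exactly why rejecting a point nil hypothesis, by itself, tells you so little.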
The idea of a null hypothesis is non-Bayesian.
A null hypothesis in Bayesian terms is a theory with a high prior probability due to minimal complexity.
I’m not sure it’s so clear cut.
Being non-Bayesian is one particular type of being untrue.
Now, what does this mean? Sounds horribly untrue.