By a coincidence of dubious humor, I recently read a paper on exactly this topic: how NHST is thoroughly misunderstood and misapplied, and what can be improved! I was only reading it for a funny & insightful quote, but Jacob Cohen (as in, ‘Cohen’s d’), on pp. 5–6 of “The Earth Is Round (p < .05)”, tells us that we shouldn’t seek to replace NHST with a “magic alternative” because “it doesn’t exist”. What we should do is focus on understanding the data with graphics and data-mining techniques; report confidence limits on effect sizes, which gives us various things I haven’t looked up; and finally, place far more emphasis on replication than we currently do.
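To make the “confidence limits on effect sizes” suggestion concrete, here’s a minimal sketch of one way to do it (simulated data, a plain bootstrap, and helper names that are all mine rather than anything from Cohen’s paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical two-group experiment (simulated with a true effect of d = 0.5).
treatment = rng.normal(0.5, 1.0, size=50)
control = rng.normal(0.0, 1.0, size=50)

# Bootstrap 95% confidence limits on the effect size.
boot = np.array([
    cohens_d(rng.choice(treatment, size=len(treatment), replace=True),
             rng.choice(control, size=len(control), replace=True))
    for _ in range(10_000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"d = {cohens_d(treatment, control):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The output reports a magnitude and its precision instead of a bare reject/don’t-reject, which is the substance of the recommendation.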
An admirable program; we don’t have to shift all the way to Bayesian reasoning to improve matters. Incidentally, what Bayesian inferences are you talking about? I thought the usual proposals/methods involved principally reporting log odds, to avoid exactly the issue of people having varying priors and updating on trials to get varying posteriors.
I thought the usual proposals/methods involved principally reporting log odds, to avoid exactly the issue of people having varying priors and updating on trials to get varying posteriors.

This only works in extremely simple cases.

Could you give an example of an experiment that would be too complex for log odds to be useful?
Any example where there are more than two potential hypotheses.
Note that, for example, “this coin is unbiased”, “this coin is biased toward heads with p=.61”, and “this coin is biased toward heads with p=.62” count as three different hypotheses for this purpose.
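A toy version of why that matters, with made-up data (say, 62 heads in 100 flips): with two hypotheses a single log odds summarizes the evidence, but with three you are stuck reporting the likelihoods themselves, or several pairwise ratios:

```python
from math import comb, log

# Made-up data: 62 heads in 100 flips.
n, k = 100, 62

def likelihood(p):
    """Binomial likelihood of the observed flips given heads-probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

hypotheses = {"p=.50 (unbiased)": 0.50, "p=.61": 0.61, "p=.62": 0.62}
L = {name: likelihood(p) for name, p in hypotheses.items()}

# No single number captures the evidence; print every pairwise log odds.
names = list(L)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"log odds, {a} vs {b}: {log(L[a] / L[b]):+.3f}")
```

(Two of the three pairwise log odds determine the third, but that is still two numbers, and the count grows with the number of hypotheses.)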
This is fair as a criticism of log odds, but in the example you give, one could avoid the issue of people having varying priors by just reporting the value of the likelihood function. However, reporting the whole likelihood function stops being a practical summary once the model is large and has lots of nuisance parameters.
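To sketch the nuisance-parameter problem in miniature (my own toy example, not anything from the thread): even one nuisance parameter turns the likelihood into a surface rather than a curve, so “reporting the likelihood function” already forces a choice of reduction, e.g. the profile likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 3.0, size=30)  # simulated; sigma is a nuisance parameter
n = len(data)

def log_lik(mu, sigma):
    """Normal log-likelihood of the sample (additive constants dropped)."""
    return -n * np.log(sigma) - np.sum((data - mu) ** 2) / (2 * sigma**2)

# To report a curve in mu alone, one standard reduction is the profile
# likelihood: maximize over the nuisance parameter at each value of mu.
def profile_log_lik(mu):
    sigma_hat = np.sqrt(np.mean((data - mu) ** 2))  # MLE of sigma given mu
    return log_lik(mu, sigma_hat)

for mu in [1.0, 2.0, 3.0]:
    print(f"mu = {mu:.1f}: profile log-likelihood = {profile_log_lik(mu):.2f}")
```

With one nuisance parameter this is easy; with hundreds, every such reduction is another modeling choice on which readers can diverge.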
Incidentally, what Bayesian inferences are you talking about? I thought the usual proposals/methods involved principally reporting log odds, to avoid exactly the issue of people having varying priors and updating on trials to get varying posteriors.
I didn’t have any specific examples in mind. But more generally, posteriors are a function of both priors and likelihoods. So even if one avoids using priors entirely by reporting only likelihoods (or some function of them, like the log of the likelihood ratio), the implied inferences can still change if one’s likelihoods change, which happens whenever the likelihoods are calculated with a different model.
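For a made-up concrete version of that: the same 62-heads-in-100-flips data gives different log odds for the same pair of hypotheses about p, depending on whether the likelihood is computed under an i.i.d. binomial model or an overdispersed beta-binomial one:

```python
from math import comb, exp, lgamma, log

n, k = 100, 62  # the same made-up data: 62 heads in 100 flips

def log_beta(a, b):
    """Log of the Beta function, via log-gamma."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def binom_lik(p):
    """Likelihood under an i.i.d. binomial model."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def betabinom_lik(p, nu=20.0):
    """Likelihood under a beta-binomial model centered at p, with
    concentration nu allowing extra flip-to-flip variability."""
    a, b = p * nu, (1 - p) * nu
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

# Same data, same hypotheses about p; different models, different log odds.
for name, lik in [("binomial", binom_lik), ("beta-binomial", betabinom_lik)]:
    print(f"{name}: log odds, p=.50 vs p=.62 = {log(lik(0.50) / lik(0.62)):+.3f}")
```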