Yet you didn’t respond to his statement of the Bayesian alternative, namely, reporting likelihoods. Reporting likelihoods addresses all of your complaints (because it doesn’t rely on a prior at all). You can use arbitrary likelihood-ratio cutoffs in essentially the same way that you’d use arbitrary p-value cutoffs.
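For concreteness, here is a minimal sketch of what that could look like, in Python with scipy (the coin-flip data, the two hypotheses, and the cutoff of 8 are all invented for illustration; none of it comes from the original post):

```python
# Using a likelihood-ratio cutoff in place of a p-value cutoff.
# All numbers below are made up for illustration.
from scipy.stats import binom

k, n = 64, 100                      # hypothetical data: 64 heads in 100 flips

lik_fair   = binom.pmf(k, n, 0.5)   # P(data | coin is fair)
lik_biased = binom.pmf(k, n, 0.6)   # P(data | coin lands heads 60% of the time)

likelihood_ratio = lik_biased / lik_fair
CUTOFF = 8  # an arbitrary threshold, playing the same role as "p < 0.05"

if likelihood_ratio > CUTOFF:
    print(f"LR = {likelihood_ratio:.1f}: the data favor the biased-coin hypothesis")
else:
    print(f"LR = {likelihood_ratio:.1f}: the data don't clearly favor either hypothesis")
```

Note that the likelihood ratio compares two explicit hypotheses, which is exactly the point of the first advantage below.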
Some advantages of likelihoods over p-values:
You are encouraged to explicitly contrast hypotheses against each other, rather than pretending that there’s a privileged “null hypothesis” to contrast against. This somewhat helps avoid the failure mode of rejecting a straw-man null hypothesis that no one actually believed and calling that a significant result.
If you do have a prior, it’s super easy to update on likelihoods (or, even better, likelihood ratios): posterior odds are just prior odds times the likelihood ratio (see the sketch after this list).
p-values are almost likelihoods anyway; they just add the weird “x or greater” trick (the probability of a result at least as extreme as the one observed, rather than the likelihood of the result itself), which makes them harder to translate into likelihood ratios.
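To make the last two points concrete, here is a second minimal sketch (again with invented numbers and an assumed scipy dependency): a Bayesian update is just prior odds times the likelihood ratio, while a p-value sums over outcomes “at least as extreme” instead of using the likelihood of what was actually observed:

```python
# Illustrative only: the prior and the data are made up.
from scipy.stats import binom

k, n = 64, 100                        # same hypothetical data as the sketch above

# (1) Updating on a likelihood ratio: posterior odds = prior odds * likelihood ratio.
prior_odds = 1 / 4                    # a made-up prior of 4:1 against the biased coin
lr = binom.pmf(k, n, 0.6) / binom.pmf(k, n, 0.5)
posterior_odds = prior_odds * lr
print(f"prior odds {prior_odds:.2f} -> posterior odds {posterior_odds:.2f}")

# (2) The "x or greater" trick: a one-sided p-value is a tail probability,
#     not the likelihood of the observed count itself.
p_value    = binom.sf(k - 1, n, 0.5)  # P(K >= k | fair coin)
likelihood = binom.pmf(k, n, 0.5)     # P(K == k | fair coin)
print(f"p-value {p_value:.4f} vs. likelihood of the exact outcome {likelihood:.4f}")
```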
In other words: why mess up the nice, elegant math of likelihoods with the weird alterations that turn them into p-values? Since likelihoods meet all the criteria you’ve stated in your post, and more besides, there should be some additional motivation for using p-values instead; some advantage over likelihoods that is worth the cost.
I’m pretty sure I’ve missed something, given that the number of papers giving yet-another-argument-against-p-values is approximately infinite, but that’s what I can come up with.