I imagine the place where likelihood ratios would be reported most often is meta-analyses, though this is just a hunch and I have no actual data to back it up.
I think this could be true. Papers report p-values and flag estimates as significant at specific thresholds with asterisks because that is the convention in a field, so you could probably bootstrap a new field or a new research culture to a different equilibrium from the start.
On the other hand, I believe p-values are used precisely because you only need the null hypothesis to report them. Asking researchers to think about concrete alternatives to their hypotheses is harder, and Bayesian inference runs into consistency problems when the "true model" is not in the family of models you are considering in your paper: the posterior simply concentrates on whichever candidate fits the data best, however wrong it is. This is the main obstacle in going from likelihood ratios to posteriors, and it is probably why Bayesian methods never really took off compared to good old p-values.
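To make that last step concrete, here is a minimal sketch (my own toy example; the hypotheses, data, and prior odds are all invented for illustration) of how a likelihood ratio only becomes a posterior once you commit to a prior and assume one of the candidate models is true:

```python
from scipy.stats import binom

# Toy example (mine, not from the discussion): two point hypotheses
# about a coin's bias, so both likelihoods are easy to write down.
k, n = 60, 100            # observed heads out of n flips
p_null, p_alt = 0.5, 0.6  # H0 and one concrete alternative

# The likelihood ratio needs only the two likelihoods, no prior.
lr = binom.pmf(k, n, p_alt) / binom.pmf(k, n, p_null)

# Going from a likelihood ratio to a posterior requires a prior and,
# implicitly, the assumption that one of these two models is true.
prior_odds = 1.0  # 1:1 prior odds on alt vs. null (illustrative)
posterior_odds = prior_odds * lr
posterior_alt = posterior_odds / (1 + posterior_odds)

print(f"LR = {lr:.2f}, P(alt | data) = {posterior_alt:.2f}")
# If the truth is neither 0.5 nor 0.6, the posterior still piles up on
# whichever candidate fits best and can look confidently wrong.
```

The likelihood ratio line is the part a p-value user never has to write: it forces you to name a concrete alternative, and the two lines after it show where the prior and the "one of these is true" assumption sneak in.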