Is there any kind of paper where reporting likelihood ratios is common? I don’t recall ever reading one, or seeing one discussed.
My current suspicion on why we don’t see more of it comes down to citations: there’s no relevant prior analysis for a given researcher to cite, on which they can build or against which they can compare their likelihood ratios.
It feels to me like a viable strategy for kickstarting a conversion to likelihood ratios in a field would be to fund papers that re-examine the field’s foundational findings under a likelihood paradigm, providing a ready base of citable references for people conducting current research. I’d really like to see something similar to the kind of analysis done here, but for different versions of the 2nd Law of Thermodynamics, for example. Unfortunately, I don’t see analysis of data being touted as an area of focus for any of the “new ways to conduct research” orgs that have popped up in the last few years.
I imagine the papers where likelihood ratios would be reported most often are meta-analyses, though this is just a hunch and I have no actual data to back it up.
I think this could be true. Papers report p-values and mark estimates as significant at specific thresholds with asterisks because that’s the convention in their field, so you could probably bootstrap a new field or a new research culture into a different equilibrium from the start.
On the other hand, I believe p-values are used precisely because you only need the null hypothesis to report them. Asking researchers to think about concrete alternatives to their hypotheses is more difficult, and you can run into all kinds of consistency problems when you do Bayesian inference in an environment where the “true model” is not in the family of models that you’re considering in your paper—this is the main obstacle you face when going from likelihood ratios to posteriors. This problem is probably why Bayesian methods never really took off compared to good old p-values.
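To make the asymmetry concrete, here’s a minimal Python sketch (the setup is my own illustration, not from the discussion above: a known-variance normal model with a point null H0: μ = 0 and a point alternative H1: μ = 0.5). The p-value calculation touches only the null; the likelihood ratio can’t even be written down until you commit to a specific alternative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=0.3, scale=1.0, size=50)  # toy data; neither hypothesis is exactly true

n, sigma = len(data), 1.0
xbar = data.mean()

# p-value: only the null hypothesis H0: mu = 0 is needed.
z = xbar / (sigma / np.sqrt(n))
p_value = 2 * stats.norm.sf(abs(z))  # two-sided

# Likelihood ratio: requires committing to a concrete alternative, H1: mu = 0.5.
# When the true model is in neither hypothesis (here the data mean is 0.3), the
# ratio can come out strongly in favor of whichever wrong model sits closer to the data.
loglik_h0 = stats.norm.logpdf(data, loc=0.0, scale=sigma).sum()
loglik_h1 = stats.norm.logpdf(data, loc=0.5, scale=sigma).sum()
likelihood_ratio = np.exp(loglik_h1 - loglik_h0)

print(f"p-value under H0 alone:   {p_value:.4f}")
print(f"likelihood ratio H1 : H0: {likelihood_ratio:.2f}")
```

Point hypotheses are the simplest case; with composite alternatives you’d have to maximize or integrate over the alternative’s parameters, which is exactly the extra modeling work described above. And since the data here were drawn from a mean neither hypothesis contains, the sketch also shows the misspecification worry: the ratio confidently compares two models that are both wrong.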