I think my real gripe is that I see this massive impact of frequentism on the scientific method as promoting the use of p-values and confidence intervals, which, IMO, use conditional probabilities in the wrong direction (one way to tell this: ask any normal scientist what a p-value or a confidence interval is, and there’s a high chance that they’ll give an explanation of what the Bayesian equivalent would be).
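To make the "wrong direction" point concrete, here is a minimal sketch (my own toy example, not from the thread): a p-value conditions on the null hypothesis and asks about the data, P(data at least this extreme | H0), while the Bayesian posterior conditions on the data and asks about the hypothesis, P(H0 | data). The two-hypothesis model and the 50/50 prior below are arbitrary illustrative choices.

```python
import math

# Toy setup: a coin is flipped n=20 times and comes up heads k=15 times.
def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 20, 15

# Frequentist: P(>= k heads | coin is fair) -- a one-sided p-value.
# This conditions on H0 and makes a statement about the data.
p_value = sum(binom_pmf(i, n, 0.5) for i in range(k, n + 1))

# Bayesian: P(coin is fair | data), under a deliberately simple model
# with only two hypotheses -- fair (p=0.5) vs. biased (p=0.75) -- and a
# 50/50 prior. This conditions on the data and speaks about H0.
lik_fair = binom_pmf(k, n, 0.5)
lik_biased = binom_pmf(k, n, 0.75)
posterior_fair = lik_fair / (lik_fair + lik_biased)

print(f"p-value    P(data | H0) = {p_value:.4f}")
print(f"posterior  P(H0 | data) = {posterior_fair:.4f}")
# The two numbers answer different questions and need not be close,
# which is why reading a p-value as P(H0 | data) is a mistake.
```

The mismatch between the two outputs is exactly the confusion described above: scientists asked to define a p-value often describe the posterior instead.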
I’m a little surprised this didn’t come up earlier. As I mentioned to Adrià, I think the thing Bayesianism is about is more “how to think about epistemology” (where complaints like “but not everything is a probability distribution! How do you account for conjectures?” live) and the fact that the main frequentist tool used in science is totally misused and misunderstood seems to me like it’s a pretty good argument in favor of “you should be thinking like a Bayesian.”
Like, if the thing with frequentism is “yeah just use methods in a pragmatic way and don’t think about it that hard,” it’s not really a surprise that people don’t think about things that hard, and this leads to widespread confusion and mistakes.
the thing with frequentism is “yeah just use methods in a pragmatic way and don’t think about it that hard”
I think this does not accurately represent my beliefs. It is about thinking hard about how the methods actually behave, as opposed to having a theory that prescribes how methods should behave and then constructing algorithms based on that.
Frequentists analyze the properties of an algorithm that takes data as input (in their jargon, an ‘estimator’).
They also try to construct better algorithms, but each new algorithm is bespoke and requires original thinking, as opposed to Bayes which says “you should compute the posterior probability”, which makes it very easy to construct algorithms. (This is a drawback of the frequentist approach—algorithm construction is not automatic. But the finite-computation Bayesian algorithms have very few guarantees anyways so I don’t think we should count it against them too much).
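The contrast between the two workflows can be sketched in code (a toy Bernoulli example of my own, under stated assumptions, not anything proposed in the thread): the frequentist picks an estimator and then studies how it behaves over repeated draws of the data, while the Bayesian recipe is always the same mechanical step, "compute the posterior."

```python
import random

random.seed(0)

# Frequentist workflow: choose an estimator (here the sample mean for a
# Bernoulli(theta) parameter) and analyze its behavior over repeated
# datasets -- e.g. estimate its bias by simulation.
theta, n = 0.3, 50
estimates = []
for _ in range(2000):
    data = [1 if random.random() < theta else 0 for _ in range(n)]
    estimates.append(sum(data) / n)
bias = sum(estimates) / len(estimates) - theta
print(f"estimated bias of the sample mean: {bias:+.4f}")  # near zero

# Bayesian workflow: no estimator-design step. The recipe is "compute
# the posterior"; with a Beta(1, 1) prior on theta this is a one-line
# conjugate update, automatically yielding an algorithm.
data = [1 if random.random() < theta else 0 for _ in range(n)]
heads = sum(data)
alpha, beta = 1 + heads, 1 + (n - heads)  # Beta posterior parameters
posterior_mean = alpha / (alpha + beta)
print(f"posterior mean for theta: {posterior_mean:.4f}")
```

The frequentist half required choosing the sample mean and then verifying its properties; a better estimator would need fresh analysis. The Bayesian half fell out mechanically from the prior and likelihood, which is the "easy to construct algorithms" point, with the caveat that such automatic constructions carry few finite-computation guarantees.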
I think having rando social scientists using likelihood ratios would also lead to mistakes and such.