Frequentist statistics were invented in a (failed) attempt to keep subjectivity out of science in a time before humanity really understood the laws of probability theory
I’m a Bayesian, but do you have a source for this claim? It was my understanding that frequentism was mostly promoted by Ronald Fisher in the 20th century, well after the work of Bayes.
Synthesised from Wikipedia:
While the first cited frequentist work (Jacob Bernoulli’s weak law of large numbers, 1713; cited in the “Frequentist probability” article) predates Bayes’ work (edited and published by Price in 1763; see “Bayes’ theorem”), it does so by only about fifty years. Further, according to the article on “Frequentist probability”, “[Bernoulli] is also credited with some appreciation for subjective probability (prior to and without Bayes theorem).”
The ones who pushed frequentism in order to achieve objectivity were Fisher, Neyman and Pearson. From “Frequentist probability”: “All valued objectivity, so the best interpretation of probability available to them was frequentist”. Fisher did other nasty things too, such as exploiting how hard it is to soundly establish causality in order to argue that tobacco had not been proven to cause cancer. But nothing indicates that any of this stemmed from a failure to understand the laws of probability theory.
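For reference, here is a modern statement of the two results being compared above (my own summary of the standard textbook forms, not taken from the Wikipedia articles):

```latex
% Bernoulli's weak law of large numbers (1713), in its Bernoulli-trial form:
% the observed frequency of successes converges in probability to the
% underlying success probability p.
\[
  \hat{p}_n = \frac{1}{n}\sum_{i=1}^{n} X_i, \quad
  X_i \sim \mathrm{Bernoulli}(p)\ \text{i.i.d.}, \qquad
  \forall \varepsilon > 0:\ \lim_{n\to\infty}
  \Pr\!\left(\left|\hat{p}_n - p\right| > \varepsilon\right) = 0.
\]

% Bayes' theorem, as published by Price in 1763.
\[
  \Pr(H \mid D) = \frac{\Pr(D \mid H)\,\Pr(H)}{\Pr(D)}.
\]
```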
AI scientists use the Bayesian interpretation
Sometimes yes, sometimes no. Even Bayesian AI scientists use frequentist statistics pretty often.
This post makes it sound like frequentism is useless, and that is not true. The concepts of a stochastic estimator for a quantity, whether it is biased, and its variance were developed by frequentists to analyse real-world data. AI scientists use them to analyse algorithms like gradient descent or approximate Bayesian inference schemes; the tools are definitely useful.
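To make that concrete, here is a minimal sketch (NumPy, with a synthetic linear-regression loss I made up purely for illustration) of that frequentist toolkit in action: treating the mini-batch gradient as a stochastic estimator of the full-batch gradient and checking its bias and variance by Monte Carlo.

```python
# A minimal sketch of frequentist estimator analysis applied to mini-batch
# gradients. The problem, data, and batch size below are illustrative
# assumptions, not anything from the original post.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression problem: loss(w) = mean((X @ w - y)**2) / 2
n, d = 10_000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=0.1, size=n)

w = np.zeros(d)  # point at which we evaluate gradients


def full_gradient(w):
    """Exact gradient of the mean-squared-error loss over all n points."""
    return X.T @ (X @ w - y) / n


def minibatch_gradient(w, batch_size=32):
    """Stochastic estimator of full_gradient(w) from a random mini-batch."""
    idx = rng.choice(n, size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / batch_size


# Monte Carlo estimate of the estimator's mean and per-coordinate variance.
samples = np.stack([minibatch_gradient(w) for _ in range(5_000)])
bias = samples.mean(axis=0) - full_gradient(w)  # ~0: the estimator is unbiased
variance = samples.var(axis=0)                  # shrinks as batch_size grows

print("max |bias|   :", np.abs(bias).max())
print("mean variance:", variance.mean())
```

Nothing Bayesian is happening here, yet this bias/variance view is exactly how mini-batch SGD and many approximate-inference schemes are usually analysed.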