they consider Eliezer’s treatment of frequentism/Bayesianism to be something of a strawman, and see no particular reason to paint them as two drastically differing camps when real statisticians are happy to use methods drawn from both.
In that case, we got very different impressions about how Eliezer described the two camps; here is what I heard: <channel righteous fury of Eliezer’s pure Bayesian soul>
It’s not Bayesians on the one hand and frequentists on the other, each despising the other’s methods. Rather, it’s a small group of epistemic statisticians and a large majority of instrumentalist ones.
The epistemic camp is the small band of AI researchers using statistical models to represent probability in order to design intelligence, learning, and autonomy. The idea is that ideal models are provably Bayesian, and the task undertaken is to understand and implement close approximations of them.
The instrumentalist mainstream doesn’t always claim to be representing probability and doesn’t feel lost without that kind of philosophical underpinning. Instrumentalists attack whatever problem is at hand with all the statistical models and variables they can muster to get the curve, isolated variable, etc. that they’re looking for and think is best. The most important part of an instrumentalist model is the statistician him- or herself, who does the Bayesian updating adequately and without the need for understanding.
</channel righteous fury of Eliezer’s pure Bayesian soul>
Saying that the division is a straw man because most statisticians use all methods misses the point.
Edit: see for example here and here.