I find that the foundation of probability theory is still done using Kolmogorov’s frequentist axiomatization in terms of measure theory for spaces of total measure 1.0, while Bayesian statistics still justifies itself in terms of Cox’s Theorem.
Can you expand on that? The connection between Kolmogorov’s and Cox’s foundations and frequentist vs. Bayesian interpretations is not clear to me. The only mathematical difference is that Cox’s axioms don’t give you countable additivity, but that doesn’t seem to be a frequentist vs. Bayesian point of dispute.
Kolmogorov’s axiomatization treats events as elements of a powerset rather than as propositions. Cox’s axioms give you belief-allocation among any finite number of quantifier-free propositions, from which countable additivity can then be derived quickly.
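For concreteness, here is a rough sketch of the two starting points in my own notation (not a canonical presentation of either system). Kolmogorov posits a probability space $(\Omega, \mathcal{F}, P)$ with $\mathcal{F} \subseteq 2^{\Omega}$ a $\sigma$-algebra and

$$
P(A) \ge 0, \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i) \quad \text{for pairwise disjoint } A_i \in \mathcal{F},
$$

so countable additivity is taken as an axiom. Cox instead posits a plausibility function $\operatorname{pl}(\cdot \mid \cdot)$ on propositions, constrained by

$$
\operatorname{pl}(A \wedge B \mid C) = F\big(\operatorname{pl}(B \mid C),\ \operatorname{pl}(A \mid B \wedge C)\big), \qquad \operatorname{pl}(\neg A \mid C) = S\big(\operatorname{pl}(A \mid C)\big),
$$

from which, after rescaling, the product and sum rules follow, and hence additivity over any finite set of mutually exclusive propositions:

$$
P(A_1 \vee \cdots \vee A_n \mid C) = \sum_{k=1}^{n} P(A_k \mid C).
$$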
Whereas I kinda think that distributions and degrees of belief ought to be themselves foundational, degrading to classical logic in the limit as we approach certainty. This is the general approach taken in the probabilistic programming community.
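A toy sketch of that last idea, in plain Python rather than any particular probabilistic programming framework (the class and method names below are my own, and the independence assumption in and_/or_ is a deliberate simplification): propositions carry degrees of belief in [0, 1], combined by the product and sum rules, and at p = 0 or p = 1 the operations collapse to the classical truth tables.

# Toy sketch: degrees of belief on propositions, recovering Boolean logic
# at certainty. Illustrative only; not drawn from any specific
# probabilistic programming system. The independence assumption baked
# into and_/or_ is the simplification that makes it a toy.

from dataclasses import dataclass


@dataclass(frozen=True)
class Belief:
    """Degree of belief that a proposition is true, as a number in [0, 1]."""
    p: float

    def not_(self) -> "Belief":
        return Belief(1.0 - self.p)

    def and_(self, other: "Belief") -> "Belief":
        # Product rule, assuming the two propositions are independent.
        return Belief(self.p * other.p)

    def or_(self, other: "Belief") -> "Belief":
        # Sum rule (inclusion-exclusion), same independence assumption.
        return Belief(self.p + other.p - self.p * other.p)


TRUE, FALSE = Belief(1.0), Belief(0.0)

# At certainty the connectives reduce to the classical truth tables...
assert TRUE.and_(FALSE).p == 0.0
assert TRUE.or_(FALSE).p == 1.0
assert FALSE.not_().p == 1.0

# ...while intermediate degrees of belief behave like probabilities.
rain = Belief(0.3)
wind = Belief(0.5)
print(rain.and_(wind))   # Belief(p=0.15)
print(rain.or_(wind))    # Belief(p=0.65)
print(rain.not_())       # Belief(p=0.7)

The point is only the limiting behaviour: once every belief is 0 or 1, the calculus above is just Boolean logic.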