Barkley Rosser, I think the difference between objective Bayesians and subjective Bayesians has more to do with how they treat prior distributions than how they view asymptotic convergence.
I’m personally not an objective Bayesian by your definition—I don’t think there are stable, true probability distributions “out there.” Nevertheless, I do find the asymptotic convergence theorems meaningful. In my view, asymptotic convergence gets you to the most informative distribution conditional on some state of information, but that state of information need not be maximal, i.e., it need not determine the observables. When the best you can do is a frequency distribution over observables, it’s because you’ve left out essential details, not because the observables are themselves random.
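To make the convergence point concrete, here is a minimal sketch of my own (not anything you said, and with all names and the Beta-Binomial setup chosen purely for illustration): the posterior concentrates on the observed frequency as data accumulate, even though the "true" bias just stands in for details the model has left out.

```python
import numpy as np

# Minimal illustration, assuming a Beta-Binomial model and a hypothetical
# data-generating frequency p_unknown. The posterior mean converges to the
# observed frequency and the posterior spread shrinks as n grows.
rng = np.random.default_rng(0)
p_unknown = 0.3          # hypothetical frequency (stands in for omitted details)
a, b = 1.0, 1.0          # uniform Beta(1, 1) prior

for n in [10, 100, 1000, 10000]:
    heads = rng.binomial(n, p_unknown)
    post_a, post_b = a + heads, b + (n - heads)
    mean = post_a / (post_a + post_b)
    sd = np.sqrt(post_a * post_b /
                 ((post_a + post_b) ** 2 * (post_a + post_b + 1)))
    print(f"n={n:6d}  posterior mean={mean:.3f}  posterior sd={sd:.4f}")
```

The limit it converges to is just the frequency distribution implied by the information you conditioned on, which is the sense in which I find the theorems meaningful without positing a "true" distribution out there.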