Hmm, interesting. I will go and learn more deeply what de Finetti was getting at. It is a little confusing… in this simple case ok fine p can be defined in a straightforward way in terms of the predictive distribution, but in more complicated cases this quickly becomes extremely difficult or impossible. For one thing, a single model with a single set of parameters may describe outcomes of vastly different experiments. E.g. consider Newtonian gravity. Ok fine strictly the Newtonian gravity part of the model has to be coupled to various other models to describe specific details of the setup, but in all cases there is a parameter G for the universal gravitation constant. G impacts on the predictive distributions for all such experiments, so it is pretty hard to see how it could be defined in terms of them, at least in a concrete sense.
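To pin down what "p defined in terms of the predictive distribution" looks like in the simple exchangeable case, here is a toy sketch (my own illustration, not taken from de Finetti directly): with a Beta prior over an exchangeable Bernoulli sequence, the predictive probability of the next success converges to the observed relative frequency, which is all the "parameter" p operationally amounts to.

```python
import random

random.seed(0)

# Toy de Finetti illustration: for an exchangeable Bernoulli sequence
# with a Beta(a, b) prior, the posterior-predictive probability is
#   P(x_{n+1} = 1 | k successes in n trials) = (k + a) / (n + a + b),
# which tends to the relative frequency k/n as n grows -- so "p" can be
# read off as the limiting predictive probability of the next outcome.

def predictive_next(successes, n, a=1.0, b=1.0):
    """Posterior-predictive probability of the next success (Beta-Bernoulli)."""
    return (successes + a) / (n + a + b)

p_true = 0.3
n = 100_000
flips = [1 if random.random() < p_true else 0 for _ in range(n)]
pred = predictive_next(sum(flips), n)
print(pred)  # close to p_true
```

Of course, this is exactly the "simple case" where the identification works; the point in the text stands that nothing so direct is available for something like G.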
I’d guess that in Geisser-style predictive inference, the meaning or reality or what-have-you of G is to be found in the way it encodes the dependence (or maybe, compresses the description) of the joint multivariate predictive distribution. But like I say, that’s not my school of thought—I’m happy to admit the possibility of physical model parameters—so I really am just guessing.
Hmm, do you know of any good material to learn more about this? I am actually extremely sympathetic to any attempt to rid model parameters of physical meaning; I mean in an abstract sense I am happy to have degrees of belief about them, but in a prior-elucidation sense I find it extremely difficult to argue about what it is sensible to believe a-priori about parameters, particularly given parameterisation dependence problems.
I am a particle physicist, and a particular problem I have is that parameters in particle physics are not constant; they vary with renormalisation scale (roughly, energy of the scattering process), so that if I want to argue about what it is a-priori reasonable to believe about (say) the mass of the Higgs boson, it matters a very great deal what energy scale I choose to define my prior for the parameters at. If I choose (naively) a flat prior over low-energy values for the Higgs mass, it implies I believe some really special and weird things about the high-scale Higgs mass parameter values (they have to be fine-tuned to the bejesus); while if I believe something more “flat” about the high scale parameters, it in turn implies something extremely informative about the low-scale values, namely that the Higgs mass should be really heavy (in the Standard Model—this is essentially the Hierarchy problem, translated into Bayesian words).
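To make the scale-dependence point concrete, here is a deliberately crude toy (the numbers and the additive "running" map are invented for illustration, not real Standard Model renormalisation-group equations): model the low-scale mass-squared as the high-scale value minus a large fixed quantum correction of order the cutoff. A flat prior on the high-scale value then assigns tiny prior probability to a light low-scale Higgs, which is the Bayesian face of the fine-tuning argument.

```python
import random

random.seed(1)

# Toy illustration (NOT a real RG calculation): m2_low = m2_high - DELTA,
# with DELTA a large fixed "quantum correction" comparable to the cutoff.
# A flat prior on m2_high makes |m2_low| small only in a narrow sliver of
# prior mass, of order M2_LIGHT / LAMBDA2.

LAMBDA2 = 1e8   # toy cutoff scale squared (arbitrary units)
DELTA = 0.9e8   # toy correction, comparable to the cutoff
M2_LIGHT = 1e4  # "light Higgs" window: |m2_low| below this

n = 1_000_000
light = 0
for _ in range(n):
    m2_high = random.uniform(0.0, LAMBDA2)  # flat high-scale prior
    m2_low = m2_high - DELTA
    if abs(m2_low) < M2_LIGHT:
        light += 1

frac = light / n
print(frac)  # roughly 2 * M2_LIGHT / LAMBDA2, i.e. about 2e-4
```

The same construction run in reverse (a flat prior on m2_low) forces a fine-tuned relationship between m2_high and DELTA, so neither choice of "flat" is innocent.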
Anyway, if I can more directly reason about the physically observable things and detach from the abstract parameters, it might help clarify how one should think about this mess...
I can pass along a recommendation I have received: Operational Subjective Statistical Methods by Frank Lad. I haven’t read the book myself, so I can’t actually vouch for it, but it was described to me as “excellent”. I don’t know if it is actively prediction-centered, but it should at least be compatible with that philosophy.
Thanks, this seems interesting. It is pretty radical; he is very insistent on the idea that for all ‘quantities’ about which we want to reason there must be some operational procedure we can follow in order to find out what they are. I don’t know what this means for the ontological status of physical principles, models, etc, but I can at least see the naive appeal… it makes it hard to understand why a model could ever have the power to predict new things we have never seen before, though, like Higgs bosons...