As I see it, probability is essentially just a measure of our ignorance, or the ignorance of any model that’s used to make predictions. An event with a probability of 0.5 implies that in half of all situations where I have information indistinguishable from the information I have now, this event will occur; in the other half of all such indistinguishable situations, it won’t happen.
Here I think you’re mixing two different approaches. One is the Bayesian approach: it comes down to saying that probabilistic theories are normative. The question is then how to reconcile that with the fact that these theories make some predictions which don’t look normative at all: for example, saying that blackbody radiation flux scales with the fourth power of temperature seems like a concrete prediction that doesn’t have much to do with the ignorance of any particular observer. QM is even more troublesome, but you don’t need to go there to begin to see some puzzles.
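Concretely, the prediction in question is the Stefan–Boltzmann law: the radiant flux emitted per unit area of a blackbody is

$$ j = \sigma T^4, \qquad \sigma \approx 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}, $$

which is derived from a probabilistic (statistical-mechanical) treatment of the radiation field, yet reads like a plain statement about the world rather than about anyone’s state of knowledge.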
The second is to say that in some circumstances you’ll get a unique probability measure on an event space by requiring that the measure is invariant under the action of some symmetry group on the space. I think this is a useful meta-principle for choosing probability measures (for example, unitary symmetry of QM → Born rule), and it can get you somewhere if you combine it with Dutch-book-style arguments. But in practice I assign probabilities to lots of events which don’t have the kind of nice symmetry that die rolls or coin flips have, and I think what I’m doing there is a reasonable thing to do; I just don’t know how to explain what I’m doing or how to justify it properly.
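To make the nicely symmetric case concrete, here is a minimal sketch in Python (purely illustrative, not anyone’s actual method): a probability vector on a die’s six faces that is invariant under every relabelling of the faces must equal its own average over the permutation group, and that average is the uniform vector no matter where you start, so uniformity is forced.

```python
# Minimal sketch: symmetry forces the uniform measure on a die.
# A probability vector invariant under all relabellings of the six faces
# equals its own average over the permutation group S_6, and that average
# is uniform regardless of the starting vector.
from itertools import permutations
import numpy as np

p = np.random.dirichlet(np.ones(6))            # an arbitrary probability vector
perms = list(permutations(range(6)))           # the symmetry group S_6 (720 elements)
group_average = np.mean([p[list(s)] for s in perms], axis=0)

print(group_average)   # ~[0.1667, 0.1667, ...]: the uniform distribution
```

The events I actually bet on day to day mostly don’t come with a group like this acting on them, which is where the principle runs out.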
Similarly, temperature also measures our ignorance, or rather lack of control, of the trajectories of a large number of particles… To a God-level being that actually does track the universal wave function and knows (and has the ability to control) the trajectories of every particle everywhere, there is no such thing as temperature, no such thing as probability.
The problem here is that there are plenty of physical phenomena which are probably best understood in terms of temperature even if you’re God. Phase transitions are one example: it seems unlikely that a “good understanding” of the superconducting phase transition could avoid any mention of temperature or statistical mechanics.
Thus, probability, temperature, etc. become necessary tools for predicting and controlling reality at the level of rational agents embedded in the physical universe, with all the ignorance and impotence that comes along with it.
I agree with this in general, but we use probability in many different senses, some of them not really connected to this problem of uncertainty. I’ve given some examples already in this comment, and you can even produce examples from mathematics: for instance, plenty of analytic number theory can be summed up as trying to understand in what sense the Liouville function is random (i.e. in what sense you can model it as a “coin flip”) and how to prove that it is.
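To illustrate the “coin flip” picture numerically (a rough sketch, not anything from analytic number theory proper): the Liouville function is λ(n) = (−1)^Ω(n), where Ω(n) counts prime factors with multiplicity, and if λ really behaved like a fair coin its partial sums up to N would typically be of size about √N.

```python
# Rough numerical illustration: does the Liouville function look like a coin flip?
# lambda(n) = (-1)^Omega(n), with Omega(n) the number of prime factors of n
# counted with multiplicity; a fair coin would give partial sums of order sqrt(N).
import numpy as np

N = 10**6
omega = np.zeros(N + 1, dtype=np.int64)     # Omega(n), computed by a sieve
for p in range(2, N + 1):
    if omega[p] == 0:                       # p untouched so far => p is prime
        pk = p
        while pk <= N:
            omega[pk::pk] += 1              # every multiple of p^k gains one factor
            pk *= p

liouville = (-1) ** omega[1:]               # lambda(1), ..., lambda(N)
L = np.cumsum(liouville)                    # partial sums L(x) = sum_{n <= x} lambda(n)

print(L[-1], np.sqrt(N))                    # compare L(N) against sqrt(N)
```

Square-root cancellation here, i.e. |L(x)| growing no faster than x^(1/2+ε), is exactly the kind of “it really does behave like a coin flip” statement that is believed but unproven; it is essentially equivalent to the Riemann hypothesis.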
Unfortunately, I think none of this answers the question of what the “epistemic status” of a probabilistic theory actually is.