shminux: I don’t know any way, even in principle, to prove that uncertainty is Knightian. (How do you decisively refute someone who claims that if only we had a better theory, we could calculate the probabilities?) Though even here, there’s an interesting caveat. Namely, I also would have thought as a teenager that there could be no way, even in principle, to “prove” something is “truly probabilistic,” rather than deterministic but with complicated hidden parameters. But that was before I learned the Bell/CHSH theorem, which does pretty much exactly that (if you grant some mild locality assumptions)! So it’s at least logically possible that some future physical theory could demand Knightian uncertainty in order to make internal sense, in much the same way that quantum mechanics demands probabilistic uncertainty.
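To make the Bell/CHSH point concrete, here is a minimal sketch (not part of the original exchange) computing the CHSH correlator for the standard quantum measurement angles. The correlation function E(a, b) = cos(2(a − b)) assumed below is the textbook form for a maximally entangled two-qubit state; any local-hidden-variable (deterministic-but-hidden) model is bounded by |S| ≤ 2, while the quantum value reaches 2√2.

```python
import numpy as np

# CHSH sketch: for a maximally entangled two-qubit state, the correlator
# between measurement angles a and b is E(a, b) = cos(2*(a - b)).
# Local hidden variables force |S| <= 2; quantum mechanics reaches 2*sqrt(2).

def E(a, b):
    # Quantum correlation for the maximally entangled state (assumed convention)
    return np.cos(2 * (a - b))

# Standard CHSH measurement angles (radians)
a0, a1 = 0.0, np.pi / 4
b0, b1 = np.pi / 8, -np.pi / 8

S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(S)  # 2*sqrt(2) ~= 2.828..., violating the classical bound of 2
```

The violation is what rules out "deterministic but with complicated hidden parameters" (granting the locality assumptions): no assignment of pre-existing values can reproduce S > 2.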
But setting aside that speculative possibility, there’s a much more important point in practice: namely, it’s much easier to rule out that a given source of uncertainty is Knightian, or at least to place upper bounds on how much Knightian uncertainty it can have. To do so, you “merely” have to give a model for the system so detailed that, by using it, you can:
(1) calculate the probability of any event you want, to any desired accuracy,
(2) demonstrate, using repeated tests, that your probabilities are well-calibrated (e.g., of the things you say will happen roughly 60% of the time, roughly 60% of them indeed happen, and moreover the subset of those things that happen passes all the standard statistical tests for not having any further structure), and
(3) crucially, provide evidence that your probabilities don’t merely reflect epistemic ignorance. In practice, this would almost certainly mean providing the causal pathways by which the probabilities can be traced down to the quantum level.
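Step (2), the calibration test, can be illustrated with a small simulation (an illustrative sketch, not from the original discussion). Here the simulated world genuinely draws outcomes at the model's stated probabilities, so the model passes; a source with substantial Knightian uncertainty could not be counted on to pass such checks.

```python
import random

# Calibration sketch for step (2): generate events whose true chances match
# the model's stated probabilities, then check that among events predicted
# at ~60%, roughly 60% actually occur.

random.seed(0)

predictions = [random.random() for _ in range(100_000)]  # model's stated probabilities
outcomes = [random.random() < p for p in predictions]    # outcomes drawn at those probabilities

# Collect events the model predicted at roughly 60%, and measure how often they happened.
bucket = [o for p, o in zip(predictions, outcomes) if 0.55 <= p < 0.65]
freq = sum(bucket) / len(bucket)
print(freq)  # close to 0.6 for a well-calibrated model
```

In practice one would repeat this across all probability bins and also run the further "no residual structure" tests mentioned above on the realized outcomes.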
Admittedly, (1)-(3) sound like a tall order! But I’d say that they’ve already been done, more or less, for all sorts of complicated multi-particle quantum systems (in chemistry, condensed-matter physics, etc.): we can calculate the probabilities, compare them against observation, and trace the origin of the probabilities to the Born rule.
Of course, if you have a large ensemble of identical copies of your system (or things you regard as identical copies), then that makes validating your probabilistic model a lot more straightforward, for then you can replace step (2) by direct experimental estimation of the probabilities. But in the above, I was careful never to assume that we had lots of identical copies—since if the freebit picture were accepted, then in many cases of interest to us we wouldn’t!
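The ensemble shortcut can be sketched numerically (an illustrative example with an arbitrarily chosen true probability): with N identical copies, direct frequency estimation replaces calibration testing, and the standard error shrinks like 1/√N.

```python
import math
import random

# Sketch: with N identical copies, step (2) reduces to direct frequency
# estimation, with statistical error ~ 1/sqrt(N).

random.seed(1)
p_true = 0.3  # illustrative "true" probability of the event

for n in (100, 10_000, 1_000_000):
    estimate = sum(random.random() < p_true for _ in range(n)) / n
    print(n, estimate, "+/-", 1 / math.sqrt(n))
```

Without identical copies, as in the freebit setting, one is thrown back on the harder calibration-across-many-different-predictions route.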
“How do you decisively refute someone who claims that if only we had a better theory, we could calculate the probabilities?”
This seems like too strong a statement. After all, if one knew exactly the initial quantum state at the Big Bang, then one would also know all the freebits. I believe that what you are after is not proving that no theory would allow us to calculate the probabilities, but rather that our current best theory does not. In your example, the claim would be that knowing any amount of macrofacts from the past still would not allow us to calculate the probabilities of some future macrofacts. My question was about a potential experimental signature of such a situation.
I suspect that this would be a rather worthwhile question to think seriously about, potentially leading to Bell-style insights. I wonder what a simple toy model of such a situation could be: a general theory G, a partial theory P, and a set of experimental data E from which one can conclude that there is no well-calibrated set of probabilities P -> p(E) derivable from P alone, even though there is one derivable from G, G -> p(E). Hmm, I might be letting myself get carried away a bit.
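One crude toy model of the G/P/E situation (an invented illustration, with all specifics hypothetical): suppose the general theory G knows a hidden bit h on each trial and predicts accordingly, while the partial theory P only has access to the average over h and so must predict 50%. If nature's hidden bits are not distributed the way P implicitly assumes, G stays calibrated on the data E while P is systematically miscalibrated.

```python
import random

# Toy G-vs-P sketch. G sees a hidden bit h and predicts 0.9 (h=1) or 0.1 (h=0);
# P, lacking h, always predicts 0.5. Here nature's hidden bits are adversarial
# (all 1), so the same data vindicate G and refute P's calibration.

random.seed(2)
trials = 10_000
hits = 0
for _ in range(trials):
    h = 1                      # nature's hidden bit (here: always 1)
    p_G = 0.9 if h else 0.1    # G's conditional prediction
    hits += random.random() < p_G

freq = hits / trials
print("G predicted 0.9, observed:", freq)  # close to 0.9 -> calibrated
print("P predicted 0.5, observed:", freq)  # same data -> badly miscalibrated
```

Of course this only shows that a particular partial theory fails calibration; the harder, Bell-style question is ruling out *every* well-calibrated assignment derivable from P alone.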