You need “if the number on this device looks to me like the one predicted by the theory, then the theory is right” just like you need “if I run a billion experiments and the frequency looks to me like the probability predicted by the theory, then the theory is right”.
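For concreteness, here is a minimal sketch of the kind of check I mean (the biased coin, the number of trials, and the tolerance are all my own illustrative choices, not anything the theory itself supplies):

```python
import random

# Hypothetical toy setup: the "theory" predicts a coin lands heads with p = 0.3.
p_theory = 0.3
n_trials = 1_000_000   # stand-in for the "billion experiments"
tolerance = 0.01       # my own, theory-external choice of what "looks like" means

heads = sum(random.random() < p_theory for _ in range(n_trials))
frequency = heads / n_trials

# The judgment "the frequency looks like the predicted probability" lives here,
# in the tolerance I picked, not in the probabilistic theory itself.
print(f"predicted p = {p_theory}, observed frequency = {frequency:.4f}")
print("theory looks right" if abs(frequency - p_theory) < tolerance
      else "theory looks wrong")
```

The point is that the “looks to me like” step is doing the same work in both quotes: it is my judgment call, sitting outside the theory, in the probabilistic case just as in the deterministic one.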
You can say that you’re trying to solve a “downward modeling problem” whenever you try to link any kind of theory you have to the real world. The point of the question is that in some cases the solution to this problem is clearer to us than in others, and in the probabilistic case we seem to be using some unspecified model map to get information content out of the probability measure that comes as part of a probabilistic theory. We’re obviously able to do that, but I don’t know how we do it, so that’s what the question is about.
Saying that “it’s just like a deterministic theory” is not a useful comment because it doesn’t answer this question; it just says “there is a similar problem to this which is also difficult to answer, so we should not be optimistic about the prospects of answering this one either”. I’m not sure that I buy that argument, however, since the deterministic and probabilistic cases look sufficiently different to me that I can imagine the probabilistic case being resolved while treating the deterministic one as a given.
So yes, you either abandon the concept of deterministic truth or use the probabilistic theory normatively.
You don’t actually know that you have to do that, so this seems like a premature statement to make. It also seems highly implausible to me that these are the only two options, in light of some of the examples I’ve discussed both in the original question and in my replies to some of the answers people have submitted. Again, I think phase transition models offer a good example.
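To spell out one way of reading that example (this is my own toy illustration, much simpler than an actual phase transition model, not something from the original question): a probabilistic theory can assign a coarse-grained, large-N outcome a probability so close to 0 or 1 that it is effectively making a deterministic prediction about it, which is part of why I can imagine the probabilistic case being handled while taking the deterministic one as given.

```python
import math

# Illustrative only: fraction of heads in N fair coin flips.
# By Hoeffding's inequality, P(|freq - 0.5| > eps) <= 2 * exp(-2 * N * eps**2),
# so for macroscopic N the theory assigns the coarse outcome
# "freq is within eps of 0.5" a probability indistinguishable from 1.
eps = 0.01
for n in (10**2, 10**4, 10**6, 10**8):
    bound = 2 * math.exp(-2 * n * eps**2)
    print(f"N = {n:>9}: P(deviation > {eps}) <= {bound:.3e}")
```

For small N the bound is uninformative, and for large N the “allowed” outcome is all but certain; that sharpening in a limit is the structural feature I have in mind.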
I don’t get your examples: for a theory that predicts a phase transition to have information content in the desired sense you would also need to specify a model map. What’s the actual difference from the deterministic case? That the “solution is more clear”? I mean, that’s probably just down to what happens to be implemented in brain hardware or something, and I didn’t have the sense that that was what the question was about.
Or is it about non-realist probabilistic theories not specifying which outcomes are impossible in a realist sense? Then I don’t understand what’s confusing about treating the probabilistic part normatively: that’s just what being non-realist about probability means.
Hence it’s a comment and not an answer^^.