I agree with your final paragraph – I’m fine with assuming there is a true probability. That said, I think there’s an important difference between how accurate a prediction was, which can be straightforwardly defined as its similarity to the true probability, and how good a job the predictor did.
If we’re just talking about the former, then I don’t disagree with anything you’ve said, except that I would question calling it an “epistemically good” prediction – “epistemically good” sounds to me like it refers to performance. Either way, mere accuracy seems like the less interesting of the two.
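(For concreteness, one way to formalize “similarity to the true probability” – this is my own gloss, not anything you committed to – is a penalty on the distance to the true probability $p^*$:

$$\text{accuracy}(p) = -(p - p^*)^2$$

Any strictly decreasing function of $|p - p^*|$ would do just as well; nothing below hinges on the exact choice.)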
If we’re talking about the latter, then using the true probability as a comparison is problematic even in principle because it might not correspond to any intuitive notion of a good prediction. I see two separate problems:
There could be hidden variables. Suppose there is an election between candidate A and candidate B. Unbeknownst to everyone, candidate A has a brain tumor that will dramatically manifest itself three days before election day. Given this, the true probability that A wins is very low. But that can’t mean that everyone who assigned a low probability to A winning did a good job – by assumption, their predictions were unrelated to the reason the probability was low.
Even if there are no hidden variables, it might be that accuracy doesn’t monotonically increase with improved competence. Say there’s another election (no brain tumor involved). We can imagine that all of the following is true:
Naive people will assign about 50/50 odds.
Smart people will recognize that candidate A will have better debate performance and will assign 60/40 odds in A’s favor.
Very smart people will recognize that B’s poor debate performance will actually help them because it makes them relatable, so they will assign 30/70 odds.
Extremely smart people will recognize that the economy is likely to crash before election day, which will hurt B’s chances more than anything else, and will assign 80/20 odds. This is close to the true probability.
In this case, going from smart to very smart actually makes your prediction worse, even though you picked up on a real phenomenon.
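To make the non-monotonicity concrete, here’s a minimal sketch. It assumes accuracy is measured as absolute distance to the true probability and takes 0.8 as that true probability – both are illustrative choices matching the example above, not anything more principled:

```python
# Minimal sketch: accuracy as absolute distance to the true probability.
# The true probability of 0.8 and the individual predictions are the
# illustrative numbers from the example above.

true_p = 0.8  # true probability that A wins

predictions = {
    "naive": 0.5,            # 50/50 odds
    "smart": 0.6,            # sees A's debate advantage
    "very smart": 0.3,       # also sees B's relatability effect
    "extremely smart": 0.8,  # also anticipates the economic crash
}

for level, p in predictions.items():
    print(f"{level:>15}: prediction {p:.1f}, error {abs(p - true_p):.1f}")

# Errors come out 0.3, 0.2, 0.5, 0.0 – going from "smart" to "very smart"
# increases the error even though it reflects a genuine insight.
```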
I personally think it might be possible to define the quality of a single prediction in a way that includes the true probability, but I don’t think it’s straightforward.