Here’s a simple analysis of what’s going on when we feel a prediction is right for the wrong reasons:
One function of the habit of making firm predictions is to test our models of the world and to give others strong evidence about how good those models are. What’s happening when we are right for the wrong reasons is that our model is confirmed by the explicitly predicted outcome but then disconfirmed by some detail that wasn’t described in the prediction. The subsequent disconfirmation might be strong enough to cancel out the initial confirmation, or even stronger. But even if it only partially cancels the initial confirmation, focusing only on the evidence given by the explicitly predicted outcome exaggerates the extent to which our model is confirmed by the total evidence.
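In Bayesian terms (a minimal formalization I’m adding here, with E the explicitly predicted outcome, D the additional detail not described in the prediction, and M_1, M_2 the two models), the likelihood ratio on the total evidence factors as

$$\frac{P(E, D \mid M_1)}{P(E, D \mid M_2)} = \frac{P(E \mid M_1)}{P(E \mid M_2)} \cdot \frac{P(D \mid E, M_1)}{P(D \mid E, M_2)}$$

The first factor can favor M_1 (confirmation by the explicit prediction) while the second factor is small enough that the whole product favors M_2 (net disconfirmation).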
Your fencing example: based solely on disagreement about the fencers’ relative ability, two predictors make different predictions about whether I win or my opponent wins. I lose 14 touches, but my opponent is disqualified, etc. “My” predictor collects his money, but his model is not clearly vindicated. If we focus only on the explicitly predicted event, his model assigned it a greater probability than the other predictor’s model did, and so is confirmed by the explicitly predicted outcome. But conditional on me winning, “my” predictor’s model assigns a (much?) lower probability than the other predictor’s model to the total evidence that I won in the stated circumstances. So the rest of the total evidence disconfirms “my” predictor’s model, likely strongly enough that his model is disconfirmed on net.
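To make that concrete, here is a toy calculation; every number in it is a made-up illustrative assumption, not anything taken from the example itself:

```python
# Toy Bayes-factor calculation for the fencing example.
# All probabilities below are invented purely for illustration.

# Model A: "my" predictor thinks I am the stronger fencer.
# Model B: the other predictor thinks my opponent is stronger.
p_win_A = 0.7            # P(I win | model A)
p_win_B = 0.3            # P(I win | model B)

# Probability of the specific way I won (opponent disqualified after
# I had lost 14 touches), conditional on my winning at all.
p_detail_given_win_A = 0.01   # model A expected a win on merit, not a fluke
p_detail_given_win_B = 0.20   # model B finds this kind of fluke far more plausible

# Bayes factor (A vs. B) on the coarse outcome alone: favors A.
bf_coarse = p_win_A / p_win_B
print(f"Coarse outcome only: BF = {bf_coarse:.2f}")   # ~2.33, confirms A

# Bayes factor on the total evidence: the detail swamps the coarse win.
bf_total = (p_win_A * p_detail_given_win_A) / (p_win_B * p_detail_given_win_B)
print(f"Total evidence:      BF = {bf_total:.2f}")    # ~0.12, disconfirms A
```

With these (arbitrary) numbers, the explicit prediction confirms “my” predictor’s model by a factor of about 2.3, but the full outcome disconfirms it by a factor of roughly 8 on net.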
The problem seems to be that coarse-grained predictions only provide coarse-grained information about the accuracy of our models of the world. Since we (qua rationalists) are interested in predictions because they are potentially strong evidence about our models, we should try to make our explicit predictions more fine-grained, as you recommend (while considering the obvious costs).