I’m not convinced that improving calibration won’t improve accuracy, because predictions are often nested within other predictions. For example, suppose we are trying to make a prediction about P, and the truth or falsity of Q, R, and S are each relevant to the truth of P. We might base our guess about P on being 95% confident in our guesses about Q, R, and S (suppose the truth of all three would guarantee P), which compounds to roughly an 86% estimate for P. Now suppose that, through better calibration, we become less confident and decide there is only a 70% chance of Q, a 70% chance of R, and a 70% chance of S; the compound probability drops to about 34%, well below 50%. Since the better-calibrated component estimates yield a more accurate compound estimate for P, overall accuracy can be improved by calibration.
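To make the arithmetic concrete, here is a minimal sketch of the two compound estimates, assuming Q, R, and S are independent and that P holds exactly when all three do (the numbers are just the hypothetical ones from the example above):

```python
# Compound probability of P under the stated assumptions:
# P is true exactly when Q, R, and S are all true, and the three are independent,
# so the estimate for P is the product of the three component estimates.

overconfident = 0.95 ** 3  # ~0.857: the poorly calibrated compound estimate
calibrated = 0.70 ** 3     # ~0.343: the better calibrated compound estimate

print(f"overconfident compound estimate: {overconfident:.3f}")
print(f"calibrated compound estimate:    {calibrated:.3f}")
```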