For example, it is easy to see that an agent should not believe that its own beliefs are well-calibrated on all questions.
Hmm. If it believed itself to be well-calibrated on questions where it is certain, we get Loeb's paradox; but are there any obvious problems with an agent that thinks it is well-calibrated on all questions where it is not certain?
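To spell out the Loebian worry (a minimal sketch, assuming we model the agent's "certainty" as provability $\Box_T$ in a consistent theory $T$ strong enough for Loeb's theorem to apply): if the agent endorses, inside $T$, its own calibration on every question it is certain of, then

\[
T \vdash \Box_T \varphi \rightarrow \varphi
\quad\Longrightarrow\quad
T \vdash \varphi
\qquad \text{(Loeb's theorem, for each sentence } \varphi\text{)},
\]

so asserting the left-hand side for every $\varphi$ makes $T$ prove every $\varphi$, i.e. the agent becomes certain of everything, falsehoods included. That argument leans on certainty specifically, which is presumably why the restriction to questions where the agent is not certain is the interesting case.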
Nice choice of phrase.