I’ve touched on this before, but it would be wise to take your meta-certainty into account when calibrating. It wouldn’t be hard for me to claim 99.9% accurate calibration just by making a bunch of very easy predictions (an extreme example would be buying a bunch of different dice and predicting how they’ll roll). My post goes into more detail, but TL;DR: by trying to predict how accurate your prediction is going to be, you can start to distinguish between “harder” and “easier” phenomena. That makes it easier to compare different people’s calibration and lets you check how good you really are at making predictions.
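To make the padding effect concrete, here’s a minimal Python sketch (not from my post, and the specific numbers are made up for illustration): a forecaster who is genuinely overconfident on hard questions can still show a small overall calibration gap simply by mixing in lots of trivially easy dice-style predictions. Separating the “easy” and “hard” pools, which is roughly what the meta-prediction step lets you do, exposes the miscalibration again.

```python
import random

random.seed(0)

def calibration_gap(preds):
    """Absolute gap between average stated confidence and observed hit rate."""
    avg_conf = sum(p for p, _ in preds) / len(preds)
    hit_rate = sum(hit for _, hit in preds) / len(preds)
    return abs(avg_conf - hit_rate)

# "Easy" pool: predict "this fair die will NOT roll a 6" at 5/6 confidence.
easy = [(5 / 6, random.randint(1, 6) != 6) for _ in range(10_000)]

# "Hard" pool (hypothetical): questions stated at 70% confidence that
# only come true about 60% of the time, i.e. genuine overconfidence.
hard = [(0.70, random.random() < 0.60) for _ in range(10_000)]

print(f"easy-only gap:  {calibration_gap(easy):.3f}")         # ~0.00
print(f"hard-only gap:  {calibration_gap(hard):.3f}")         # ~0.10
print(f"mixed-pool gap: {calibration_gap(easy + hard):.3f}")  # ~0.05, looks better than it should
```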