Normally I think of meta-certainty in terms of the parameters of an underlying model. I am “sure” that the probability of the coin is 50/50 because my underlying model tells me it is hard to get more information about the coin that would shift that estimate. But if I expect to gain more information soon, say I’m told that it’s a heavily biased coin and I just have to flip it a few times to find out which way it’s biased, then I might express this expected gain of information by saying that the probability is only “on average” 0.5, and that I “expect it to be” either 0.1 or 0.9.
Really, I think talking about the model is quite natural, and once you’ve done that there’s no extra thing that is the meta-uncertainty.
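To make the biased-coin example concrete, here is a minimal sketch, assuming (purely for illustration) that the bias is either 0.1 or 0.9 with equal prior weight and that five flips are observed: averaging over the model parameter gives 0.5 before flipping, and a handful of flips collapses the posterior toward one of the two biases.

```python
# Minimal sketch of the biased-coin story above (numbers are illustrative assumptions):
# the coin's heads-probability is either 0.1 or 0.9 with equal prior weight, so the
# marginal probability of heads is 0.5 "on average", yet a few flips are enough to
# reveal which bias we are actually dealing with.
import random

biases = [0.1, 0.9]
prior = {0.1: 0.5, 0.9: 0.5}

# Marginal P(heads) before any flips: 0.5 * 0.1 + 0.5 * 0.9 = 0.5
marginal = sum(p * b for b, p in prior.items())
print(f"P(heads) before flipping: {marginal}")

# Simulate flips from one of the two biases and do a Bayesian update on which it is.
true_bias = random.choice(biases)
posterior = dict(prior)
for _ in range(5):
    heads = random.random() < true_bias
    for b in posterior:
        posterior[b] *= b if heads else (1 - b)
    total = sum(posterior.values())
    posterior = {b: p / total for b, p in posterior.items()}

print(f"posterior over the bias after 5 flips: {posterior}")
print(f"updated P(heads): {sum(p * b for b, p in posterior.items()):.3f}")
```

All of the “meta-uncertainty” here lives in the posterior over the bias parameter; the probability of heads at any moment is just the average under that posterior.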