What does it mean to have certainty over a degree of certainty?
When I say “I’m 99% certain that my prediction ‘the die has a 1 in 6 chance of rolling a five’ is correct”, I’m expressing a degree of certainty about my degree of certainty. I’m basically making a prediction about how good I am at predicting.
How do you go about measuring whether or not the certainty is right?
This is (like I said) very hard. You can only calibrate your meta-certainty by gathering a boatload of data. If I give a 1 in 6 probability of an event occurring (e.g. a die roll coming up five), and the situation repeats a million times, you can gauge how well you did on your certainty by checking how close the observed frequency came to your 1 in 6 prediction (maybe it happened more often, maybe less) and calibrate yourself to be more optimistic or pessimistic.

Similarly, if I give a 99% chance of my probabilities (e.g. the 1 in 6) being right, I’m basically saying: if the situation “me predicting something has a 1 in 6 chance of occurring” happened a million times, you could gauge how well I did on my meta-certainty by checking how many of those 1-in-6 predictions turned out to be wrong. So meta-certainty needs more data than regular certainty. It also means that you can only ever measure it a posteriori, unfortunately. And you can never know for certain whether your meta-certainty is right (the higher meta levels still exist, after all), but you can get more accurate over time.
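Here’s a rough Python sketch of the kind of calibration check I mean. The concrete numbers (a million rolls, 1,000 repeated experiments, a 0.01 tolerance band) are just illustrative choices of mine, not part of the argument itself:

```python
import random

# Object level: check one "1 in 6" prediction against a boatload of rolls.
rolls = [random.randint(1, 6) for _ in range(1_000_000)]
observed = sum(r == 5 for r in rolls) / len(rolls)
print(f"predicted {1/6:.4f}, observed {observed:.4f}")

# Meta level: repeat the whole experiment many times and count how often the
# observed frequency lands within some tolerance of the 1/6 prediction.
# If I claim 99% meta-certainty, roughly 99% of the experiments should land
# inside whatever tolerance band I had in mind (0.01 is an arbitrary choice).
def prediction_held(n_rolls=10_000, tolerance=0.01):
    hits = sum(random.randint(1, 6) == 5 for _ in range(n_rolls))
    return abs(hits / n_rolls - 1/6) <= tolerance

n_experiments = 1_000
held = sum(prediction_held() for _ in range(n_experiments))
print(f"1-in-6 prediction held in {held / n_experiments:.1%} of experiments "
      f"(claimed meta-certainty: 99%)")
```

The point of the second loop is that the meta-level claim is about how often my object-level predictions hold up, which is why it needs far more data than the object-level claim alone.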
I’m not sure how far you want me to go with trying to defend measuring as a way of finding truth. If you have a problem with the philosophical position that certainty is probabilistic, or with the position of scientific realism in general, then this might not be the best place to debate the issue. I would consider it off topic, as I just accepted them as premises for this post; sorry if that was the problem you were trying to get at.
So basically you are speaking not about Bayesian probability but about frequentist probability? If that’s the case, it’s good to be explicit about it when you post on LessWrong, where we usually mean the Bayesian thing.
In the sense the term probability is used in scientific realism, it’s defined over well-defined empirical events either happening or not happening. “Event X has probability Y”, however, isn’t an empirical event, and thus it doesn’t have a probability in the same way that empirical events do.
If it were easy to define a meta-certainty metric, then it would be easy for you to reference a statistician who has properly defined such a thing, or a philosopher in the tradition of scientific realism who has. Even when it’s intuitively desirable to define such a thing, it’s not easy to create it.
That doesn’t operationalize what it means to have a degree of certainty over a degree of certainty.