What does it mean to have certainty over a degree of certainty?
When I say “I’m 99% certain that my prediction ‘the die has a 1 in 6 chance of rolling a five’ is correct”, I’m expressing a degree of certainty about my degree of certainty. I’m basically making a prediction about how good I am at predicting.
How do you go about measuring whether or not the certainty is right?
This is (like I said) very hard. You can only calibrate your meta-certainty by gathering a boatload of data. If I assign a 1 in 6 probability to an event (e.g. a die roll coming up five), and the event gets a million trials, you can gauge how well I did on my certainty by checking how close the observed frequency came to the 1 in 6 prediction (maybe it happened more often, maybe less) and calibrate accordingly, becoming more optimistic or pessimistic. Similarly, if I give my probabilities (e.g. that 1 in 6) a 99% chance of being right, I’m basically saying: take a million cases where I predicted something had a 1 in 6 chance of occurring, and gauge my meta-certainty by checking how many of those 1-in-6 predictions turned out to be wrong. So meta-certainty needs more data than regular certainty. It also means that, unfortunately, you can only ever measure it a posteriori. And you can never know for certain whether your meta-certainty is right (the higher meta-levels still exist, after all), but you can get more accurate over time.
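The two-level calibration procedure described here can be sketched as a small simulation. This is my illustration, not code from the thread; in particular, the tolerance that turns each 1-in-6 claim into a hit or a miss is an arbitrary assumption, not something the comment specifies:

```python
import random

random.seed(0)

# Level 1 (certainty): test one probability claim against frequency data.
# Claim: "a five comes up with probability 1/6" on a fair die.
rolls = [random.randint(1, 6) for _ in range(100_000)]
observed = sum(r == 5 for r in rolls) / len(rolls)
print(f"claimed {1/6:.4f}, observed {observed:.4f}")

# Level 2 (meta-certainty): across many such 1-in-6 claims, how often did
# the claimed probability land close to the observed frequency?
TOLERANCE = 0.005          # what counts as "close" -- an arbitrary assumption
n_claims = 200
hits = 0
for _ in range(n_claims):
    batch = [random.randint(1, 6) for _ in range(10_000)]
    freq = sum(r == 5 for r in batch) / len(batch)
    hits += abs(freq - 1 / 6) < TOLERANCE
print(f"claims within tolerance: {hits / n_claims:.0%}")
```

The level-2 ratio is the frequentist reading of a statement like “I’m 99% certain my probabilities are right”, and it illustrates why meta-certainty needs far more data: each single data point at level 2 costs a whole batch of trials at level 1.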
I’m not sure how far you want me to go in defending measurement as a way of finding truth. If you have a problem with the philosophical position that certainty is probabilistic, or with scientific realism in general, then this might not be the best place to debate it. I would consider that off topic, as I simply accepted both as premises for this post; sorry if that was the problem you were trying to get at.
Basically, you are not speaking about Bayesian probability but about frequentist probability? If that’s the case, it’s good to be explicit about it when you post on LessWrong, where we usually mean the Bayesian kind.
In the sense the term is used in scientific realism, probability is defined for well-defined empirical events either happening or not happening. “Event X has probability Y”, however, isn’t an empirical event, and thus it doesn’t have a probability the way empirical events do.
If it were easy to define a meta-certainty metric, it would be easy for you to point to a statistician who has properly defined such a thing, or to a philosopher in the tradition of scientific realism. Even when such a thing is intuitively desirable, it’s not easy to create.
Probability is easy to resolve when things have clear outcomes. I don’t find it trivial to apply it to probability distributions. Say you believe a coin has a 50% chance of coming up heads and a 50% chance of coming up tails. Later it turns out the coin has a 49.9% chance of coming up heads, a 49.9% chance of coming up tails, and a 0.2% chance of landing on its side. Does the previous belief count as a hit or a miss for the purposes of meta-certainty? If we can’t agree on what hits and misses are, then I can’t get to ratios.
One could also mean that a belief like “probability of world war” gets different odds when asked about in the morning, afternoon, or night, while dice odds get more stable answers. There, “belief professed when asked” has clear outcomes. But that is harder to link to the subject matter of the belief.
It could also point to an “order of defence” kind of thing: which beliefs would be first in line to be changed. A high degree of this kind could mean something like “this belief is so important to my worldview that I would rather believe 2+2=5 than give it up”. “Conviction” could describe it, but I think subjective degrees of belief are not supposed to point to things like that.
Does the previous belief count as a hit or miss for the purposes of meta-certainty?
A miss. I would like to be able to quantify how far off certain predictions are. Sometimes you can quantify it and sometimes you can’t. I previously made a question post about this that got very little traction, so I’m going to try to solve this philosophical problem myself once I have some more time.
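One candidate quantification (my suggestion, not something proposed in the thread) is the total variation distance between the believed distribution and the true one. It replaces the binary hit/miss with a graded score, so the coin belief from the parent comment comes out nearly right rather than flatly wrong:

```python
def total_variation(p, q):
    """Half the L1 distance between two discrete distributions,
    given as dicts mapping outcomes to probabilities."""
    outcomes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in outcomes)

# The coin example from the parent comment:
believed = {"heads": 0.5, "tails": 0.5}
actual = {"heads": 0.499, "tails": 0.499, "side": 0.002}

print(round(total_variation(believed, actual), 6))  # 0.002 -- a near-hit, not a flat miss
```

Of course, this just moves the problem: one still has to argue for a particular distance measure, and for a threshold if hits and misses are wanted after all.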
One could also mean that a belief like “probability of world war” gets different odds when asked about in the morning, afternoon, or night, while dice odds get more stable answers.
This would be a possible bias in meta-certainty that could be discovered (but it isn’t the concept of meta-certainty itself).
“Conviction” could describe it, but I think subjective degrees of belief are not supposed to point to things like that.
Conviction could be an adequate word for it, but I’ll stick with “meta-certainty” to avoid confusion. You could rank your meta-certainty in an “order of defence”, but I would start out explaining it the way I did in my response to ChristianKl.
Well, it clarifies that the first of the three kinds of directions was intended.
If that is a miss, what do hits look like? If I believe a coin is 50%/50%, at what point can I say the distribution is “confirmed”? If the true distribution is 49.9999% vs 50.0001% and that counts as a miss, then almost all beliefs would be misses, with hits being rare theoretical possibilities. So, within rounding error, all beliefs that reference probabilities other than 1 or 0 have meta-certainty 0.
Note that when calculating p-values, the null hypothesis is never declared a clear miss; there always remains a finite possibility that noise was the source of the pattern.
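That point can be made concrete with an exact binomial test. This is my sketch, not code from the thread: however lopsided the coin data, the p-value stays finite, so the null hypothesis is never a clear miss:

```python
from math import comb

def binom_p_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: total probability, under the null,
    of every outcome at most as likely as the observed count k."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(x for x in pmf if x <= pmf[k] * (1 + 1e-12))

# 60 heads in 100 flips of a supposedly fair coin:
p_value = binom_p_two_sided(60, 100)
print(p_value)  # roughly 0.057: suspicious, but never exactly zero
```

The tolerance factor in the comparison is there to keep floating-point noise from dropping outcomes that are exactly as likely as the observed one.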
I was trying to convey the same problem, although the underlying issue has much broader implications. Apparently johnswentworth is trying to solve a related problem, but I’m currently not up to date on his posts, so I can’t vouch for their quality. Being able to quantify empirical differences would solve a lot of different philosophical problems in one fell swoop, so that might be something I should look into for my master’s degree.
Your degree of certainty about your degree of certainty. That’s why it’s called meta-certainty.
That doesn’t operationalize what it means to have a degree of certainty over a degree of certainty.