This is also a problem I have thought about a bit. I plan to think about it more, organize my thoughts, and hopefully make a post about it soon, but in the meantime I’ll sketch my ideas. (It’s unfortunate that this comment appeared in a post that was so severely downvoted, as fewer people are likely to think about it now.)
There is no sense in which an absolute probability can be uncertain. Given our priors, and the data we have, Bayes’ rule can only give one answer.
However, there is a sense in which conditional probability can be uncertain. Since all probabilities in reality are conditional (at the very least, we have to condition on our thought process making any sense at all), it will be quite common in practice to feel uncertain about a probability, and to be well-justified in doing so.
Let me illustrate with the coin example. When I say that the next flip has a 50% chance of coming up heads, what I really mean is that the coin will come up heads in half of all universes that I can imagine (weighted by likelihood of occurrence) that are consistent with my observations so far.
However, we also have an estimate of another quantity, namely ‘the probability that the coin comes up heads’ (generically). I’m going to call this the weight of the coin, since that is the colloquial term. When we say that we are 50% confident that the coin comes up heads (and that we have a high degree of confidence in our estimate), we really mean that we believe that the distribution over the weight of the coin is tightly concentrated about one-half. This will be the case after 10,000 flips, but not after 5 flips. (In fact, after N heads and N tails, starting from a uniform prior, a weight of x has posterior probability proportional to [x(1-x)]^N.)
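To see how fast that concentration happens, here is a minimal sketch (in Python, assuming SciPy; the function name is mine). Up to normalization, [x(1-x)]^N under a uniform prior is just a Beta(N+1, N+1) distribution:

```python
from scipy.stats import beta

# Posterior over the coin's weight x after N heads and N tails,
# starting from a uniform prior: p(x) proportional to [x(1-x)]^N,
# which is Beta(N+1, N+1).
def weight_posterior(n):
    return beta(n + 1, n + 1)

for n in (3, 5000):  # roughly 5 flips vs. 10,000 flips
    lo, hi = weight_posterior(n).interval(0.95)  # central 95% credible interval
    print(f"N={n}: 95% of the posterior mass lies in [{lo:.3f}, {hi:.3f}]")
```

After a handful of flips the interval still covers most of the possible weights; after 10,000 flips it is pinned within about a percent of one-half.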
What is important to realize is that the statement ‘the coin will come up heads with probability 50%’ means ‘I believe that in half of all conceivable universes the coin will come up heads’, whereas ‘I am 90% confident that the coin will come up heads with probability 50%’ means something more along the lines of ‘I believe that in 90% of all conceivable universes my models predict a 50% chance of heads’. There is also a further difference: in the second statement, the ‘90% of all conceivable universes’ only specifies those universes up to the level of detail at which our models can take over and make the prediction.
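To make the two-level reading concrete, here is a sketch under the assumption that our uncertainty over the weight is the Beta posterior from above, with a hypothetical tolerance of ±0.01 standing in for ‘predicts a 50% chance of heads’:

```python
from scipy.stats import beta

# 'I am X% confident the coin comes up heads with probability 50%':
# read as the posterior mass on weights close to one-half, under the
# Beta(N+1, N+1) posterior from the previous sketch.
def confidence_in_fifty_percent(n, tol=0.01):
    post = beta(n + 1, n + 1)
    return post.cdf(0.5 + tol) - post.cdf(0.5 - tol)

print(confidence_in_fifty_percent(3))     # ~0.04: almost no confidence yet
print(confidence_in_fifty_percent(5000))  # ~0.95: high confidence in '50%'
```

The first-level probability (50% heads) stays fixed throughout; what the evidence changes is the second-level confidence we assign to that probability.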
I think that this is similar to what humans do when they express confidence in a probability. However, there is an important difference: in the coin case, my ‘confidence in a probability’ corresponded to a hidden parameter that dictated the coin’s behavior under repeated trials. In most real-world situations the hidden parameter is far less clear, and we also don’t usually get to see repeated trials (I don’t think this should matter, but unfortunately my intuition is frequentist).