[Question] How to quantify uncertainty about a probability estimate?
How does one express, in numeric terms, how confident or uncertain one is about a probability estimate?
Consider some different situations where you might have 50% confidence in a “yes” answer to a question:
Will this tritium atom (a radioactive isotope of hydrogen) have decayed after one half-life?
After this fair coin is flipped, will it land heads?
After this biased coin is flipped, will it land heads?
You didn’t quite hear some question, and all you know is that it’s a yes/no question with an objective answer. Is the answer “yes”? You don’t really know whether, in general, “yes” answers are more likely to be correct than “no” answers.
Will Biden win the 2020 election?
In all of these cases you might assign a 50% probability, but you’d probably be more confident in some of your answers than others, and that confidence depends on your knowledge of the area. How do we express this difference in confidence?
Here are some possible ways I’ve thought of:
Using fuzzy words like “I feel pretty confident in this 50% prediction”.
Completely avoiding assigning numerical probabilities, in an attempt to prevent people from taking the probabilities more seriously than they should.
Writing a range of probabilities, such as “5–10%”. But it’s not clear what a range of probabilities actually means: if it’s meant as an x% confidence interval, what would a confidence interval over probabilities even mean?
Drawing a probability density graph over the probability itself, where a wider spread indicates more uncertainty. When there are only two outcomes like “yes” and “no”, however, we might use a workaround such as asking “What probability would you assign to tribbles [1] being sentient after 1000 more hours of research?” (elaborated on in this comment by NunoSempere). A rough sketch of this idea appears after this list.
Using a theory of imprecise probabilities (https://plato.stanford.edu/entries/imprecise-probabilities/), though I’d like a theory that boils down to regular probability theory.
Describing the amount of evidence it would take for you to change your mind, though it’d be hard to use this to compare different problems. (The sketch below also touches on this reading.)