Interesting. For me, thinking/saying “about 60%” is less mental load and feels more natural than “40 to 80%”. It avoids the rabbit-hole of what a range of probabilities even means—presumably that implies your probability estimates are normal around 60% with a standard deviation of 20%, or something.
Is there anything your communication recipient would do differently with a range than with a point estimate? Presumably they care about the resolution of the event (will you attend) rather than the resolution of the “correct” probability estimate.
There’s a place for “no”, “probably not”, “maybe”, “I hope to”, “probably”, “I think so”, “almost certainly”, and “yes” as a somewhat ambiguous estimate as well, but that’s a separate discussion.
Agree that the meaning of the ranges is very ill-defined. I’m most often drawn to giving a range when I have a few different heuristics that seem applicable. An example of the internals: one is just how likely the event feels when I query one of my predictive engines, and another is a very crude “outside view”—an eyeballed statistic of how well I did on similar predictions in the past. Weighing these against each other causes lots of cognitive dissonance for me, so I don’t like doing it.
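One mechanical way to resolve that kind of dissonance, rather than reporting a range, is to pool the two estimates explicitly. This is a minimal sketch of logarithmic pooling (a weighted average in log-odds space), which is just one common aggregation choice; the function names, the 0.8/0.4 inputs, and the equal weighting are all illustrative assumptions, not anything from the discussion above:

```python
import math

def to_logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def from_logodds(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def pool(p_inside, p_outside, w=0.5):
    """Logarithmic pooling: weighted average of two probability
    estimates in log-odds space. `w` is the weight on the
    inside-view estimate (hypothetical names for illustration)."""
    return from_logodds(w * to_logodds(p_inside)
                        + (1 - w) * to_logodds(p_outside))

# Say the predictive engine feels like 0.8 but the eyeballed track
# record suggests 0.4; equal weighting lands in between.
print(round(pool(0.8, 0.4), 2))  # → 0.62
```

With equal weights this is the geometric mean of the odds, so the pooled answer sits between the two inputs but is not simply their arithmetic average; how to set `w` is exactly the judgment call the comment describes as uncomfortable.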