Quantifying uncertainty is great and all, but it also exhausts precious mental energy. I am getting quite fond of giving probability ranges instead of point estimates when I want to communicate my uncertainty quickly. For example: “I’ll probably (40-80%) show up to the party tonight.” For some reason, translating natural-language uncertainty words into probability ranges feels more natural (at least to me), and so requires less work for the writer.
If the difference is important, the other person can ask, but it still seems better than just saying ‘probably’.
Interesting. For me, thinking/saying “about 60%” is less mental load and feels more natural than “40 to 80%”. It avoids the rabbit-hole of what a range of probabilities even means—presumably that implies your probability estimates are normal around 60% with a standard deviation of 20%, or something.
Is there anything your communication recipient would do differently with a range than with a point estimate? Presumably they care about the resolution of the event (will you attend?) rather than the resolution of the “correct” probability estimate.
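To make that concrete, here is a toy sketch in Python (reading “40-80%” as a truncated normal over p is my own assumption, not something the parent comment commits to): for a one-off binary event, only the mean of your distribution over p enters the forecast, so the recipient would act the same on the range as on the point estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reading of "40-80%": my credence p is itself uncertain,
# roughly normal around 0.6 with sd 0.2, truncated to [0, 1].
p_samples = rng.normal(0.6, 0.2, size=100_000)
p_samples = p_samples[(p_samples >= 0) & (p_samples <= 1)]

# For a single yes/no event, the recipient's best forecast is the mean
# of this distribution; the spread around it doesn't change the forecast.
print(p_samples.mean())  # ~0.6, i.e. the same as the point estimate
```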
There’s a place for “no”, “probably not”, “maybe”, “I hope to”, “probably”, “I think so”, “almost certainly”, and “yes” as a somewhat ambiguous estimate as well, but that’s a separate discussion.
Agree that the meaning of the ranges is very ill-defined. I think I am most often drawn to this when I have a few different heuristics that all seem applicable. An example of the internals: one is how likely the event feels when I query one of my predictive engines, and another is some very crude “outside view”/eyeballed statistic of how well I did on similar things in the past. Weighing these against each other causes lots of cognitive dissonance for me, so I don’t like doing it.
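One way to see what the range is buying you (a toy sketch; the two numbers and the equal weights are invented): collapsing the heuristics into a point estimate forces you to pick pooling weights, whereas reporting both endpoints as a range skips that step.

```python
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

# Two hypothetical internal estimates for "will I show up?"
# (the numbers are invented for illustration):
inside_view = 0.75   # gut feel from one predictive engine
outside_view = 0.45  # crude base rate eyeballed from past behaviour

# One standard pooling rule: equal-weight average in log-odds space.
pooled = sigmoid(0.5 * logit(inside_view) + 0.5 * logit(outside_view))
print(round(pooled, 2))  # ~0.61

# Reporting "45-75%" instead skips having to pick the weights at all.
```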
I think from the perspective of a radical probabilist, it is very natural not only to have a word for where your current point estimate is, but also to have some tagging on those words indicating how much computation went into the estimate, or whether it already tries to take the listener’s model into account.
Probably silly.
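Silly or not, here is a minimal sketch of what such tagging could look like (all field names are invented for illustration):

```python
from dataclasses import dataclass

# One hypothetical way to make the "tagged estimate" idea concrete.
@dataclass
class TaggedEstimate:
    p: float                 # current point estimate
    effort: str              # how much computation went into it
    audience_adjusted: bool  # did I already account for the listener's model?

guess = TaggedEstimate(p=0.6, effort="five seconds of gut feel",
                       audience_adjusted=False)
print(guess)
```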