Rounding probabilities to 0% or 100% is not a legitimate operation, because when transformed into odds format, this is rounding to infinity. Many people don't know that, but I think the set of people who round to 0/1 and the set of people who can make decent probability estimates are pretty disjoint.
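To spell out the odds-format point (a worked version of the claim above, nothing beyond it): a probability p corresponds to odds p/(1−p), so rounding p up to 1 sends the odds to infinity, and rounding down to 0 sends the log-odds to minus infinity:

$$
\mathrm{odds}(p)=\frac{p}{1-p},\qquad \lim_{p\to 1^-}\frac{p}{1-p}=\infty,\qquad \lim_{p\to 0^+}\log\frac{p}{1-p}=-\infty
$$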
I think it depends on context? E.g., for expected-value calculations rounding is fine (a 0.0001% risk of contracting a mild disease in a day can often be treated as a 0% risk). It's not obvious to me that everyone who rounds to 0 or 1 is being epistemically vicious. Indeed, if you asked me to distribute 100% among the five possibilities of HLMI having extremely bad, bad, neutral, good, or extremely good consequences, I'd give integer percentages, and I would probably assign 0% to one or two of those possibilities (unless it was clear from context that I was supposed to be doing something that precludes rounding to 0).
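A minimal sketch of the expected-value point, assuming a made-up cost figure for the mild disease (only the 0.0001% risk is from the comment; everything else is illustrative):

```python
# Expected-value sketch: with a tiny probability of a mild harm,
# rounding the probability down to 0 barely changes the expected value.
p_disease = 1e-6      # 0.0001% daily risk, as in the comment above
cost_mild = 100.0     # hypothetical cost of the mild disease (arbitrary units)

ev_exact = p_disease * cost_mild   # 0.0001, negligible for most decisions
ev_rounded = 0.0 * cost_mild       # 0.0, what treating it as a 0% risk gives

print(ev_exact, ev_rounded)        # 0.0001 0.0
```

The gap is far below any plausible decision threshold, which is why the rounding is harmless here, even though the same move would wreck an odds or log-odds calculation.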
(I do not represent AI Impacts, etc.)