A more charitable interpretation is that this is a probability rounded to the nearest percent.
Yes, and many respondents tended to give percentages that end in “0” (or sometimes “5”), so maybe some rounded even more.
We didn’t do rounding though, right? Like, these people actually said 0?
That doesn’t mean it wasn’t rounding, though. People ‘helpfully’ round their own answers all the time before giving them. ‘0’ as a probability simply means that it isn’t going to happen, not necessarily that it couldn’t, while ‘100’ means something that will definitely happen (though it may not be a logical necessity).
In some cases ‘1%’ could actually be lower than ‘0%’, since the two answers are doing different things (and 1% is a highly suspect round number for ‘extremely unlikely’ too). Ditto for ‘99%’ sometimes being higher than ‘100%’ (and 99% is an equally suspicious round number for ‘I would be very surprised if it didn’t happen’).
I don’t think it would necessarily be telling, but it might be interesting to look at the results with answers in the ‘99%–100%’ and ‘0%–1%’ ranges excluded, and then directly compare them to the results with those answers included.
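Concretely, the comparison I have in mind might look something like this (a rough sketch with made-up numbers and a hypothetical column name, assuming the answers sit in a pandas DataFrame):

```python
import pandas as pd

# Hypothetical survey data: one probability answer (in percent) per respondent.
responses = pd.DataFrame({"p_extremely_bad": [0, 1, 5, 10, 20, 30, 50, 99, 100]})

# Flag the suspect endpoint answers (0-1% and 99-100%).
extreme = responses["p_extremely_bad"].isin([0, 1, 99, 100])

print("All answers:        mean =", responses["p_extremely_bad"].mean())
print("Endpoints excluded: mean =", responses.loc[~extreme, "p_extremely_bad"].mean())
```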
Right.
Rounding probabilities to 0% or 100% is not a legitimate operation, because when transformed into odds format this is rounding to infinity. Many people don’t know that, but I think the set of people who round to 0 or 1 and the set of people who can make decent probability estimates are pretty disjoint.
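To make the “rounding to infinity” point concrete, here is a minimal sketch (illustrative numbers, nothing from the survey): a probability of 0.0001 sits at a perfectly finite log-odds of about −9.2, but rounding it to 0 sends the log-odds to negative infinity.

```python
import math

def log_odds(p):
    """Convert a probability to log-odds; diverges as p approaches 0 or 1."""
    return math.log(p / (1 - p))

for p in [0.5, 0.01, 0.001, 0.0001]:
    print(f"p = {p:>6}: log-odds = {log_odds(p):+.2f}")

# log_odds(0.0) would blow up (math.log(0) raises ValueError), which is
# the point: in odds space, rounding a small probability to 0 is an
# infinitely large change, not a small one.
```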
I think it depends on context? E.g. for expected value calculations rounding is fine (a 0.0001% risk of contracting a mild disease in a day can often be treated as a 0% risk). It’s not obvious to me that everyone who rounds to 0 or 1 is being epistemically vicious. Indeed, if you asked me to distribute 100% among the five possibilities of HLMI having extremely bad, bad, neutral, good, or extremely good consequences, I’d give integer percentages, and I would probably assign 0% to one or two of those possibilities (unless it was clear from context that I was supposed to be doing something that precludes rounding to 0).
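For example, a back-of-the-envelope sketch (with a made-up cost figure) of why rounding can be fine in expected-value terms:

```python
# Illustrative expected-value comparison: a 0.0001% (= 1e-6) daily risk
# of a mild disease, with a made-up cost of 1000 utility units.
p_true, p_rounded, cost = 1e-6, 0.0, 1000

print("EV of harm, true probability:", p_true * cost)      # 0.001
print("EV of harm, rounded to zero: ", p_rounded * cost)   # 0.0

# The difference is 0.001 units per day, negligible for almost any
# decision, so rounding to 0% is harmless here, even though the same
# rounding is catastrophic in odds space.
```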
(I do not represent AI Impacts, etc.)