Randall Munroe on communicating with humans:

A good rule of thumb might be, “If I added a zero to this number, would the sentence containing it mean something different to me?” If the answer is “no,” maybe the number has no business being in the sentence in the first place.
Related: When (Not) To Use Probabilities:

I would advise, in most cases, against using non-numerical procedures to create what appear to be numerical probabilities. Numbers should come from numbers. (...) you shouldn’t go around thinking that, if you translate your gut feeling into “one in a thousand”, then, on occasions when you emit these verbal words, the corresponding event will happen around one in a thousand times. Your brain is not so well-calibrated.
This specific topic came up recently in the context of the Large Hadron Collider (...) the speaker actually purported to assign a probability of at least 1 in 1000 that the theory, model, or calculations in the LHC paper were wrong; and a probability of at least 1 in 1000 that, if the theory or model or calculations were wrong, the LHC would destroy the world.
I object to the air of authority given these numbers pulled out of thin air. (...) No matter what other physics papers had been published previously, the authors would have used the same argument and made up the same numerical probabilities.
For the opposite claim: If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics:

Remember the Bayes mammogram problem? The correct answer is 7.8%; most doctors (and others) intuitively feel like the answer should be about 80%. So doctors – who are specifically trained in having good intuitive judgment about diseases – are wrong by an order of magnitude. And it “only” being one order of magnitude is not to the doctors’ credit: by changing the numbers in the problem we can make doctors’ answers as wrong as we want.
So the doctors probably would be better off explicitly doing the Bayesian calculation. But suppose some doctor’s internet is down (you have NO IDEA how much doctors secretly rely on the Internet) and she can’t remember the prevalence of breast cancer. If the doctor thinks her guess will be off by less than an order of magnitude, then making up a number and plugging it into Bayes will be more accurate than just using a gut feeling about how likely the test is to work. Even making up numbers based on basic knowledge like “Most women do not have breast cancer at any given time” might be enough to make Bayes Theorem outperform intuitive decision-making in many cases.
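For concreteness, the arithmetic behind that 7.8% figure can be spelled out. The sketch below assumes the standard numbers from the version of the problem Yvain cites (1% prevalence, 80% sensitivity, 9.6% false-positive rate), which is where the 7.8% answer comes from:

```python
# Bayes' theorem:
# P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)

prior = 0.01        # P(cancer): prevalence among screened women (assumed 1%)
sensitivity = 0.80  # P(positive | cancer)
false_pos = 0.096   # P(positive | no cancer)

# Total probability of a positive test, with and without cancer:
p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(cancer | positive) = {posterior:.1%}")  # -> 7.8%
# The intuitive ~80% answer confuses P(positive | cancer) with
# P(cancer | positive) -- off by roughly an order of magnitude.
```

Note that the gut answer of 80% is just the sensitivity read back; the entire error consists of ignoring the prior.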
I tend to side with Yvain on this one, at least so long as your argument isn’t going to be judged by its appearance. Specifically on the LHC thing, I think making up the 1 in 1000 makes it possible to substantively argue about the risks in a way that “there’s a chance” doesn’t.
A detailed reading provides room for these to coexist. Compare:

“If I added a zero to this number, would the sentence containing it mean something different to me?”

with

“If the doctor thinks her guess will be off by less than an order of magnitude, then making up a number and plugging it into Bayes will be more accurate than just using a gut feeling about how likely the test is to work.”
I’d agree with Randall Munroe more wholeheartedly if he had said “added a couple of zeros” instead.
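One way to make that concrete is to propagate made-up priors of varying wrongness through the same calculation. Again a sketch, assuming the standard mammogram numbers, where the true prior is 1%:

```python
def posterior(prior, sensitivity=0.80, false_pos=0.096):
    """P(cancer | positive) by Bayes' theorem."""
    p_positive = sensitivity * prior + false_pos * (1 - prior)
    return sensitivity * prior / p_positive

# Made-up priors from 10x too high, through the true 1%, to 100x too low:
for prior in (0.10, 0.01, 0.001, 0.0001):
    print(f"made-up prior {prior:7.2%} -> posterior {posterior(prior):6.2%}")

# made-up prior  10.00% -> posterior 48.08%
# made-up prior   1.00% -> posterior  7.76%
# made-up prior   0.10% -> posterior  0.83%
# made-up prior   0.01% -> posterior  0.08%
```

A prior off by one zero still leaves the doctor on the right side of “probably not cancer”; a prior off by a couple of zeros makes the posterior itself wrong by a factor of about a hundred, which is roughly where a number stops carrying any meaning at all.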