For the opposite claim: If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics:

Remember the Bayes mammogram problem? The correct answer is 7.8%; most doctors (and others) intuitively feel like the answer should be about 80%. So doctors – who are specifically trained in having good intuitive judgment about diseases – are wrong by an order of magnitude. And it “only” being one order of magnitude is not to the doctors’ credit: by changing the numbers in the problem we can make doctors’ answers as wrong as we want.

So the doctors probably would be better off explicitly doing the Bayesian calculation. But suppose some doctor’s internet is down (you have NO IDEA how much doctors secretly rely on the Internet) and she can’t remember the prevalence of breast cancer. If the doctor thinks her guess will be off by less than an order of magnitude, then making up a number and plugging it into Bayes will be more accurate than just using a gut feeling about how likely the test is to work. Even making up numbers based on basic knowledge like “Most women do not have breast cancer at any given time” might be enough to make Bayes Theorem outperform intuitive decision-making in many cases.
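To make the arithmetic concrete, here is a minimal sketch of the calculation, assuming the figures usually quoted for this problem (about 1% prevalence, 80% sensitivity, 9.6% false-positive rate – the passage above doesn’t restate them), plus a quick check of how far off a made-up prior can be before the answer drifts toward the intuitive 80%:

```python
# Sketch of the Bayes mammogram calculation. The figures below are the ones
# usually quoted for this problem and are assumed here for illustration:
#   ~1% of women screened have breast cancer (prior),
#   the test catches 80% of cancers (sensitivity),
#   and it falsely flags 9.6% of healthy women (false-positive rate).

sensitivity = 0.80       # P(positive | cancer)
false_positive = 0.096   # P(positive | no cancer)

def posterior(prior):
    """P(cancer | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive * (1 - prior)
    return sensitivity * prior / p_positive

print(f"{posterior(0.01):.1%}")  # ~7.8%, not the intuitive ~80%

# A doctor guessing the prevalence: priors that are off by a factor of a few
# still leave the posterior nowhere near 80%.
for guess in (0.003, 0.01, 0.03):
    print(f"prior {guess:.1%} -> posterior {posterior(guess):.1%}")
```

Even the 3% guess – triple the figure usually given – leaves the posterior around 20%, still a factor of four below the gut answer; that robustness to made-up priors is what the quoted passage is leaning on.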
I tend to side with Yvain on this one, at least so long as your argument isn’t going to be judged by its appearance. Specifically on the LHC thing, I think making up the 1-in-1000 figure makes it possible to argue substantively about the risks in a way that “there’s a chance” doesn’t.
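A toy expected-value comparison shows why an explicit figure can be argued with in a way that “there’s a chance” can’t. Every number below (the benefit, the cost of catastrophe, the candidate probabilities) is hypothetical and only there to make the structure visible:

```python
# Toy illustration: once a risk is given an explicit number, it can be
# weighed and disputed quantitatively. All figures here are hypothetical.

benefit = 1e10           # made-up value of running the experiment
catastrophe_cost = 1e17  # made-up cost of the disaster scenario

def worth_running(p_disaster):
    """Naive expected-value test under a stated disaster probability."""
    return benefit > p_disaster * catastrophe_cost

# "There's a chance" gives nothing to compute; an explicit made-up number
# turns the disagreement into an argument over which number is right.
for p in (1e-3, 1e-6, 1e-9):
    print(f"P(disaster) = {p:g}: worth running? {worth_running(p)}")
```

The conclusion flips as the made-up probability moves, which is exactly the point: the disagreement becomes a disagreement about a number.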
A careful reading leaves room for the two claims to coexist. Compare:
with
I’d agree with Randall Munroe more wholeheartedly if he had said “added a couple of zeros” instead.