Yeah. This is an example where using the actual formula is helpful rather than just speaking heuristically. It’s actually somewhat difficult to translate from the author’s hand-wavy model to the real Bayes’ Theorem (and it would be totally opaque to someone who hadn’t seen Bayes before).
“Study support for headline” is supposed to be the Bayes factor P(study supports headline | headline is true) / P(study supports headline | headline is false). (Well actually, everything is also conditioned on you hearing about the study.) If you think about that, it’s clear that it should be very rare to find a study that is more likely to support its conclusion when that conclusion is false than when it’s true.
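To make that concrete, here’s a tiny sketch of how the factor gets used, with made-up numbers (not from any actual study, just an illustration of the update):

```python
# Posterior odds = prior odds * Bayes factor (all numbers below are hypothetical)
prior_odds = 0.25 / 0.75           # say you start out at 1:3 that the headline is true
p_support_given_true = 0.80        # assumed: a real effect usually yields a supporting study
p_support_given_false = 0.05       # assumed: a false headline rarely does
bayes_factor = p_support_given_true / p_support_given_false
posterior_odds = prior_odds * bayes_factor
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"Bayes factor: {bayes_factor:.1f}")
print(f"posterior P(headline true): {posterior_prob:.2f}")  # ~0.84 with these numbers
```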
EDIT: the author is not actually Nate Silver.
If you’re just looking at the study, then it’s quite difficult for the support ratio to be less than one. However, suppose that on average, for every published study, there are 100 unpublished studies, and the one with the lowest p-value gets published. Then if a study has a p-value of .04, that particular study supports the headline, but the fact that that study was the one published cuts against the headline: if the headline were true, we would expect the lowest of those 100 p-values to be well below .04.
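Here’s a quick toy simulation of that model, just to put numbers on it (the assumed effect size under “headline true” is made up and deliberately weak, so treat the exact ratio as illustrative):

```python
# Toy model: 100 studies are run, only the one with the smallest p-value is published.
# Compare how often the published p lands near .04 when the headline is false vs. true.
import math
import random

def one_sided_p(z):
    # p-value of a one-sided z-test: P(Z >= z) under the null
    return 0.5 * math.erfc(z / math.sqrt(2))

def published_p(effect, n_studies=100):
    # each study draws z ~ Normal(effect, 1); the smallest p-value gets published
    return min(one_sided_p(random.gauss(effect, 1)) for _ in range(n_studies))

def freq_near_04(effect, lo=0.03, hi=0.05, trials=20_000):
    hits = sum(lo <= published_p(effect) <= hi for _ in range(trials))
    return hits / trials

random.seed(0)
p_false = freq_near_04(effect=0.0)  # headline false: every p-value is uniform
p_true = freq_near_04(effect=0.2)   # headline true: assume a weak real effect (my assumption)
print(f"P(published p near .04 | headline false) ~ {p_false:.4f}")
print(f"P(published p near .04 | headline true)  ~ {p_true:.4f}")
print(f"ratio ~ {p_true / p_false:.2f}  # below 1: the published .04 is evidence against")
```

With these settings the ratio should come out well below 1, i.e. the published .04 counts against the headline; how far below 1 depends a lot on how strong an effect you assume when the headline is true.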
Yes, that’s what I meant by “very rare”: there are situations where it happens, like the model you gave, but I don’t think the ones that happen in real life are likely to contribute a very large effect. You need really insane publication bias to get a large effect there.