Fair? No. Funny? Yes!
The main thing that jumps out at me is that the strip plays on a caricature of frequentists as unable or unwilling to use background information. (Yes, the strip also caricatures Bayesians as ultimately concerned with betting, which isn’t always true either, but the frequentist is clearly the butt of the joke.) Anyway, Deborah Mayo has been picking on this misconception about frequentists for a while now: see here and here for examples. I read Mayo as saying, roughly, that of course frequentists make use of background information; they just don’t do it by writing down precise numbers that are supposed to represent either their prior degree of belief in the hypothesis to be tested or a neutral, reference prior (a so-called “uninformative” prior) that is supposed to capture the prior degree of evidential support, or some such, for that hypothesis.
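For concreteness, here is a minimal sketch, with made-up numbers, of what “writing down precise numbers” amounts to in the simplest conjugate setting: the same binomial data combined with a subjective, elicited prior versus a flat “uninformative” reference prior. The particular Beta parameters below are hypothetical, chosen only for illustration.

```python
from scipy.stats import beta

# Hypothetical data: 7 successes in 10 trials.
k, n = 7, 10

# Two ways of "writing down precise numbers":
#   a subjective, elicited prior, Beta(8, 2) (roughly "I expect success ~80% of the time"),
#   and the flat Beta(1, 1) often offered as an "uninformative" reference prior.
priors = {"subjective Beta(8, 2)": (8, 2), "reference Beta(1, 1)": (1, 1)}

for name, (a, b) in priors.items():
    # Conjugate update: Beta(a, b) prior + binomial data -> Beta(a + k, b + n - k) posterior.
    post = beta(a + k, b + n - k)
    lo, hi = post.interval(0.95)
    print(f"{name}: posterior mean {post.mean():.3f}, 95% credible interval ({lo:.3f}, {hi:.3f})")
```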
Good frequentists do that. The method itself doesn’t promote this good practice.
And bad Bayesians use crazy priors, but:
1) There is no framework so secure that no one is dumb enough to foul it up.
2) Having to state the crazy prior explicitly brings the failure point forward in one’s attention.
I agree, but noticing 2 requires looking into how they’ve done the calculations, so simply knowing it’s Bayesian isn’t enough.
It might be enough. If it’s published in a venue where the authors would get called on bullshit priors, the fact that it’s been published is evidence that they used reasonably good priors.
The point applies well to evidentialists but not so well to personalists. If I am a personalist Bayesian—the kind of Bayesian for which all of the nice coherence results apply—then my priors just are my actual degrees of belief prior to conducting whatever experiment is at stake. If I do my elicitation correctly, then there is just no sense to saying that my prior is bullshit, regardless of whether it is calibrated well against whatever data someone else happens to think is relevant. Personalists simply don’t accept any such calibration constraint.
Excluding a research report that has a correctly elicited prior smacks of prejudice, especially in research areas that are scientifically or politically controversial. Imagine a global warming skeptic rejecting a paper because its author reports having a high prior for AGW! Although, I can see reasons to allow this sort of thing, e.g. “You say you have a prior of 1 that creationism is true? BWAHAHAHAHA!”
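The creationism quip has a principled edge, not just comic value: a prior of exactly 1 (or 0) can never be moved by any evidence, so rejecting it is not the same kind of prejudice as rejecting a merely high prior. A toy check of the arithmetic, with made-up likelihoods:

```python
def posterior(prior_h, lik_h, lik_not_h):
    """Bayes' rule for a hypothesis H versus its negation, given one batch of data."""
    evidence = prior_h * lik_h + (1 - prior_h) * lik_not_h
    return prior_h * lik_h / evidence

# With a prior of exactly 1, the posterior is 1 no matter how badly the data fit H:
print(posterior(1.0, lik_h=1e-9, lik_not_h=0.999))   # -> 1.0
# A merely high prior, by contrast, can still be argued down by the data:
print(posterior(0.99, lik_h=1e-9, lik_not_h=0.999))  # -> essentially 0
```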
One might try to avoid the problems by reporting Bayes factors as opposed to full posteriors or by using reference priors accepted by the relevant community or something like that. But it is not as straightforward as it might at first appear how to both make use of background information and avoid idiosyncratic craziness in a Bayesian framework. Certainly the mathematical machinery is vulnerable to misuse.
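Here is a minimal sketch of the Bayes-factor route on made-up data: for k successes in n binomial trials, compare the marginal likelihood under the point null p = 0.5 with the marginal likelihood under an alternative that averages over a Beta prior, and report only the ratio, leaving the prior odds to the reader.

```python
from math import comb, log, exp
from scipy.special import betaln

def log_bf10(k, n, a=1.0, b=1.0):
    """log Bayes factor for H1: p ~ Beta(a, b) versus H0: p = 0.5, given k successes in n trials."""
    # Marginal likelihood under H1: C(n, k) * B(k + a, n - k + b) / B(a, b)
    log_m1 = log(comb(n, k)) + betaln(k + a, n - k + b) - betaln(a, b)
    # Likelihood under the point null H0: C(n, k) * 0.5**n
    log_m0 = log(comb(n, k)) + n * log(0.5)
    return log_m1 - log_m0

# Hypothetical data: 36 heads in 50 flips, flat Beta(1, 1) prior on p under the alternative.
print(f"BF_10 = {exp(log_bf10(36, 50)):.2f}")
```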
That depends heavily on what “the method” picks out. If you mean the machinery of a null hypothesis significance test against a fixed-for-all-time significance level of 0.05, then I agree: the method doesn’t promote good practice. But if we’re talking about frequentism, then identifying the method with null hypothesis significance testing looks like attacking a straw man.
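For contrast, the caricatured “fixed-for-all-time 0.05” recipe applied to the same made-up data, using SciPy’s exact binomial test; note that nothing in it has a slot for how plausible the alternative was to begin with.

```python
from scipy.stats import binomtest  # available in SciPy >= 1.7

# Same hypothetical data: 36 heads in 50 flips, null hypothesis p = 0.5.
result = binomtest(36, 50, p=0.5, alternative="two-sided")

# The canned recipe: reject the null iff p < 0.05, regardless of background information.
alpha = 0.05
verdict = "reject H0" if result.pvalue < alpha else "fail to reject H0"
print(f"p-value = {result.pvalue:.4f} -> {verdict} at alpha = {alpha}")
```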
I know a bunch of scientists who learned a ton of canned tricks and take the (frequentist) statisticians’ word on how likely associations are… and the statisticians never bothered to ask how a priori likely these associations were.
If this is a straw man, it is one that has regrettably been instantiated over and over again in real life.
If not using background information means you can publish your paper with frequentist methods, scientists often don’t use background information.
Those scientists who use less background information get more significant results. Therefore they get more published papers, and then more funding than the people who use more background information. It’s publish or perish.
You could be right, but I am skeptical. I would like to see evidence—preferably in the form of bibliometric analysis—that practicing scientists who use frequentist statistical techniques (a) don’t make use of background information, and (b) publish more successfully than comparable scientists who do make use of background information.