It might be enough. If it’s published in a venue where the authors would get called on bullshit priors, the fact that it’s been published is evidence that they used reasonably good priors.
The point applies well to evidentialists but not so well to personalists. If I am a personalist Bayesian (the kind of Bayesian for whom all of the nice coherence results hold), then my priors just are my actual degrees of belief prior to conducting whatever experiment is at stake. If I do my elicitation correctly, then there is just no sense in saying that my prior is bullshit, regardless of whether it is calibrated well against whatever data someone else happens to think is relevant. Personalists simply don’t accept any such calibration constraint.
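To see how little coherence alone constrains, here is a minimal sketch in Python (the hypotheses, priors, and data are all invented for illustration): two personalists update on the same coin-flip data, both apply Bayes’ theorem correctly, and their posteriors differ only because their priors do.

# Two personalist Bayesians see the same data but start from different
# (correctly elicited) priors on H: "the coin is biased toward heads,
# p = 0.7", against the alternative "the coin is fair, p = 0.5".

def likelihood(p, heads, tails):
    # Binomial likelihood, dropping the binomial coefficient,
    # which cancels in the posterior calculation.
    return p**heads * (1 - p)**tails

heads, tails = 7, 3  # the shared data

for name, prior_h in [("Agent A", 0.5), ("Agent B", 0.9)]:
    # Bayes' theorem: P(H | data) = P(data | H) * P(H) / P(data).
    num = prior_h * likelihood(0.7, heads, tails)
    den = num + (1 - prior_h) * likelihood(0.5, heads, tails)
    print(name, "posterior for H:", round(num / den, 3))

Agent A ends up around 0.69 and Agent B around 0.95, and both updates are impeccably coherent; the disagreement lives entirely in the priors, which is exactly where the personalist says no external calibration standard applies.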
Excluding a research report whose prior was correctly elicited smacks of prejudice, especially in research areas that are scientifically or politically controversial. Imagine a global warming skeptic rejecting a paper because its author reports having a high prior for AGW! That said, I can see reasons to allow this sort of thing in extreme cases, e.g. “You say you have a prior of 1 that creationism is true? BWAHAHAHAHA!”
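The joke has a precise point behind it: a prior of exactly 1 (or 0) is immune to any possible evidence. Writing H for the hypothesis and E for any evidence with P(E) > 0, Bayes’ theorem gives

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\;=\; \frac{P(E \mid H) \cdot 1}{P(E \mid H) \cdot 1 + P(E \mid \neg H) \cdot 0}
\;=\; 1.
\]

So an author who reports a prior of 1 has announced that no experiment could ever change their mind, which is arguably grounds for exclusion even on personalist terms.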
One might try to avoid these problems by reporting Bayes factors rather than full posteriors, or by using reference priors accepted by the relevant community, or something like that. But how to make use of background information while avoiding idiosyncratic craziness in a Bayesian framework is less straightforward than it might at first appear. Certainly the mathematical machinery is vulnerable to misuse.
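For simple point hypotheses, the Bayes-factor route looks like this (a sketch in Python; the hypotheses and data are invented): the factor P(data | H1) / P(data | H0) contains no prior at all, so it can be reported neutrally, and each reader can multiply in their own prior odds.

def likelihood(p, heads, tails):
    # Binomial likelihood up to a constant that cancels in the ratio.
    return p**heads * (1 - p)**tails

heads, tails = 7, 3

# Bayes factor for H1 (p = 0.7) over H0 (p = 0.5): note that no prior
# appears anywhere in this calculation.
bf = likelihood(0.7, heads, tails) / likelihood(0.5, heads, tails)
print("Bayes factor:", round(bf, 2))

# Each reader supplies their own prior odds:
# posterior odds = prior odds * Bayes factor.
for prior_odds in (0.1, 1.0, 10.0):
    print("prior odds", prior_odds, "-> posterior odds", round(prior_odds * bf, 2))

The catch is that this only works so cleanly for simple point hypotheses. For composite hypotheses the Bayes factor itself integrates over a within-hypothesis prior, so the idiosyncrasy problem reappears one level down, which is part of why the whole thing is less straightforward than it first appears.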
I agree, but noticing (2) requires looking into how they’ve done the calculations, so simply knowing it’s Bayesian isn’t enough.