Is this a standard frequentist idea? Is there a link to a longer explanation somewhere? Well-calibrated priors and well-calibrated likelihood ratios should result in well-calibrated posteriors.
Valid confidence coverage is a standard frequentist idea. Wikipedia’s article on the subject is a good introduction. I’ve added the link to the post.
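To make "valid confidence coverage" concrete, here is a minimal simulation sketch (the normal-mean setup and all numbers are my own illustrative choices, not anything from the post): a procedure has valid 95% coverage if, over repeated sampling from a fixed truth, its intervals contain the true value about 95% of the time.

```python
# Check the frequentist coverage of a nominal 95% interval by simulation.
# Known-sigma normal-mean model, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_mu, sigma, n, reps = 3.0, 2.0, 25, 10_000
z = 1.959964  # two-sided 95% normal quantile

hits = 0
for _ in range(reps):
    x = rng.normal(true_mu, sigma, size=n)
    half_width = z * sigma / np.sqrt(n)  # known-sigma interval for simplicity
    lo, hi = x.mean() - half_width, x.mean() + half_width
    hits += (lo <= true_mu <= hi)

print(f"empirical coverage: {hits / reps:.3f}")  # should come out close to 0.95
```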
That is exactly the problem: how do you get a well-calibrated prior when you know very little about the question at hand? If your posterior is well-calibrated, your prior must have been well-calibrated too. So, seek a prior that guarantees posterior calibration. This is the "matching prior" program I described above.
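Here is a rough sketch of what that program looks like operationally (the binomial model and the Jeffreys prior are my illustrative choices): pick a candidate prior, then check by simulation whether the resulting posterior credible intervals attain close to their nominal frequentist coverage. A matching prior is one chosen so that this check succeeds, at least approximately, across the parameter space.

```python
# Check whether credible intervals under a candidate prior are frequentist-calibrated.
# Binomial likelihood with a Jeffreys Beta(1/2, 1/2) prior, an approximately
# probability-matching prior; all settings are illustrative.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)
true_p, n, reps, level = 0.3, 30, 10_000, 0.95
a0, b0 = 0.5, 0.5  # Jeffreys prior

hits = 0
for _ in range(reps):
    k = rng.binomial(n, true_p)
    post = beta(a0 + k, b0 + n - k)  # conjugate posterior Beta(a0 + k, b0 + n - k)
    lo, hi = post.ppf((1 - level) / 2), post.ppf(1 - (1 - level) / 2)
    hits += (lo <= true_p <= hi)

print(f"credible-interval coverage: {hits / reps:.3f}")  # near 0.95 if well calibrated
```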
This sounds like Gibbs sampling or expectation maximization. Are Gibbs and/or EM considered Bayesian or frequentist? (And what’s the difference between them?)
Gibbs sampling and EM aren’t relevant to the ideas of this post.
Neither Gibbs sampling nor EM is intrinsically Bayesian or frequentist. EM is just a maximization algorithm useful for certain special cases; the maximized function could be a likelihood or a posterior density. Gibbs sampling is just an MCMC algorithm; usually the target distribution is a Bayesian posterior distribution, but it doesn't have to be.
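To illustrate the point about Gibbs sampling, here is a minimal sketch in which the target is just a bivariate normal with correlation rho, not a Bayesian posterior (the target and settings are my own illustrative choices). The same goes for EM: it maximizes whatever objective you hand it, whether that is a likelihood or a posterior density.

```python
# Gibbs sampler whose target is a plain bivariate standard normal with
# correlation rho -- no prior, no posterior, just an MCMC algorithm at work.
import numpy as np

rng = np.random.default_rng(2)
rho, n_samples = 0.8, 50_000
cond_sd = np.sqrt(1 - rho ** 2)  # sd of each full conditional

x, y = 0.0, 0.0
samples = np.empty((n_samples, 2))
for i in range(n_samples):
    x = rng.normal(rho * y, cond_sd)  # draw x | y
    y = rng.normal(rho * x, cond_sd)  # draw y | x
    samples[i] = x, y

# discard a short burn-in, then the empirical correlation should be close to rho
print("empirical correlation:", np.corrcoef(samples[2000:].T)[0, 1])
```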
You said, “seek a prior that guarantees posterior calibration.” That’s what both EM and Gibbs sampling do, which is why I asked.
You and I have very different understandings of what EM and Gibbs sampling accomplish. Do you have references for your point of view?