Gibbs sampling and EM aren’t relevant to the ideas of this post.
Neither Gibbs sampling nor EM is intrinsically Bayesian or frequentist. EM is just a maximization algorithm useful in certain special cases; the function being maximized could be a likelihood or a posterior density. Gibbs sampling is just an MCMC algorithm; the target distribution is usually a Bayesian posterior, but it doesn't have to be.
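To make the point concrete, here is a minimal sketch (the targets and parameter values are illustrative, not from the discussion above): a Gibbs sampler whose target is a plain bivariate normal rather than any posterior, and an EM loop that maximizes an ordinary likelihood with no prior in sight.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Gibbs sampling a non-Bayesian target ---
# Target: bivariate normal with correlation rho. Nothing here is a
# posterior; the full conditionals alone define the sampler.
rho = 0.8
n_draws = 5000
x, y = 0.0, 0.0
draws = np.empty((n_draws, 2))
for i in range(n_draws):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # draw X | Y = y
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # draw Y | X = x
    draws[i] = (x, y)
print("empirical corr:", np.corrcoef(draws.T)[0, 1])  # close to 0.8

# --- EM maximizing a plain likelihood ---
# Two-component Gaussian mixture with unit variances; EM climbs the
# likelihood to find the MLE of the component means and mixing weight.
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])
mu1, mu2, pi = -1.0, 1.0, 0.5
for _ in range(100):
    # E-step: responsibility (membership probability) of component 1
    p1 = pi * np.exp(-0.5 * (data - mu1) ** 2)
    p2 = (1 - pi) * np.exp(-0.5 * (data - mu2) ** 2)
    r = p1 / (p1 + p2)
    # M-step: weighted maximum-likelihood updates
    mu1 = np.average(data, weights=r)
    mu2 = np.average(data, weights=1 - r)
    pi = r.mean()
print("EM estimates:", mu1, mu2, pi)
```

The same EM loop would become a MAP procedure simply by adding a log-prior term to the objective, which is exactly why the algorithm itself carries no Bayesian or frequentist commitment.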
You said, “seek a prior that guarantees posterior calibration.” That’s what both EM and Gibbs sampling do, which is why I asked.
You and I have very different understandings of what EM and Gibbs sampling accomplish. Do you have references for your point of view?