I don’t get it.
I admit my math background is limited to the upper-division undergraduate level, and that I could have tried harder to make sense of the jargon, but after reading this a few times, I really just don’t get what your point is, or even what kind of thing your point is supposed to be.
The short-short version of this part of the argument reads:
What Bayesians call calibration, frequentists call valid confidence coverage. Bayesian posterior probability intervals do not have valid confidence coverage in general; priors that can guarantee it do not exist.
Suppose the actual frequentist probability of an event is 90%, and your prior distribution for that probability is uniform. Your Bayesian probability of the event will start at 50% and approach 90% as the data accumulate, so the long-run average of the probabilities you’ve stated will be less than 90%: your stated probabilities are not calibrated.
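To see the miscalibration numerically, here’s a minimal simulation sketch. I’m assuming the standard Beta–Bernoulli setup, where the uniform prior is Beta(1, 1) and the posterior mean follows Laplace’s rule of succession; the trial count is made up:

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = 0.9        # actual frequentist probability of the event
n_trials = 10_000   # made-up trial count

outcomes = rng.random(n_trials) < true_p

# A uniform prior on p is Beta(1, 1), so after k successes in n trials
# the posterior mean is (k + 1) / (n + 2): Laplace's rule of succession.
successes = np.cumsum(outcomes)
n = np.arange(1, n_trials + 1)
posterior_means = (successes + 1) / (n + 2)

# The probability you state *before* each trial: the prior mean of 0.5
# first, then the running posterior means.
stated = np.concatenate(([0.5], posterior_means[:-1]))

print(f"first stated probability: {stated[0]:.3f}")      # 0.500
print(f"last stated probability:  {stated[-1]:.3f}")     # close to 0.900
print(f"long-run average:         {stated.mean():.3f}")  # below 0.900
```

The average sits below 90% because every early, underconfident report counts toward it.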
If the post is getting at more than this, I understand as little as you do. My answer to the title question was “no, they can’t be” going in, and if the post is trying to say something I haven’t understood, then I hope to convince the author e’s wrong through sheer disagreement.
Try rephrasing your first paragraph when the quantity of interest is not a frequency but, say, Avogadro’s number, and you’re Jean Perrin trying to determine exactly what that number is.
A frequentist would take a probability model for the data you’re generating and give you a confidence interval. A billion scientists repeat your experiments, getting their own data and their own intervals. Among those intervals, the proportion that contain the true value of Avogadro’s number equals the confidence level (up to sampling error).
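Here’s a quick coverage check in code. This is a sketch, not Perrin’s actual setup: I’m assuming normal measurement noise with a known standard deviation, and all the numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 6.022e23   # the quantity every scientist is trying to pin down
sigma = 1e22            # assumed-known measurement noise (made up)
n_obs = 10              # observations per scientist (made up)
n_scientists = 100_000  # a cheap stand-in for the billion
z = 1.96                # normal quantile for 95% confidence

# Each row is one scientist's data; each reports mean +/- z*sigma/sqrt(n).
data = rng.normal(true_value, sigma, size=(n_scientists, n_obs))
means = data.mean(axis=1)
half_width = z * sigma / np.sqrt(n_obs)

covered = np.abs(means - true_value) < half_width
print(f"coverage: {covered.mean():.3f}")  # about 0.95, up to sampling error
```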
A Bayesian would take the same probability model, plus a prior, and combine them using Bayes’ theorem. Each scientist may have her own prior, and posterior calibration is only guaranteed if (i) all the priors, taken as a group, were calibrated, or (ii) everyone is using the matching prior, if one exists (matching priors are typically improper, so prior calibration cannot even be calculated).
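Point (ii) can be checked in the same toy model: for a normal mean with known variance, the flat improper prior is the matching prior, and its credible interval coincides exactly with the confidence interval, while a proper prior centered in the wrong place does not. A sketch, again with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
true_mu, sigma, n_obs = 0.0, 1.0, 10   # made-up normal model, known variance
n_scientists = 100_000
z = 1.96

means = rng.normal(true_mu, sigma, size=(n_scientists, n_obs)).mean(axis=1)

# (ii) Flat improper prior: the posterior is Normal(sample mean, sigma^2/n),
# so the 95% credible interval is exactly the confidence interval.
hw_flat = z * sigma / np.sqrt(n_obs)
flat_cov = (np.abs(means - true_mu) < hw_flat).mean()
print(f"flat (matching) prior coverage: {flat_cov:.3f}")   # about 0.95

# A proper Normal(m0, tau^2) prior centered at the wrong value shrinks the
# posterior toward m0 and narrows it, so its credible interval under-covers.
m0, tau = 2.0, 0.5
post_var = 1.0 / (1.0 / tau**2 + n_obs / sigma**2)
post_mean = post_var * (m0 / tau**2 + n_obs * means / sigma**2)
hw_post = z * np.sqrt(post_var)
wrong_cov = (np.abs(post_mean - true_mu) < hw_post).mean()
print(f"wrong informative prior coverage: {wrong_cov:.3f}")  # well below 0.95
```

With these numbers the wrong-prior coverage lands far below 95%, which is the sense in which an arbitrary prior fails the coverage criterion even when the probability model is right.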