You’re welcome for the link, and it’s more than repaid by your causal inference restatement of the Robins-Ritov problem.
Of course, arguably this entire setting is one Bayesians don’t worry about (but maybe they should? These settings do come up).
Yeah, I think this is the heart of the confusion. When you encounter a problem, you can turn the Bayesian crank and it will always do the Right thing, but it won’t always do the right thing. What I find disconcerting (as a Bayesian drifting towards frequentism) is that it’s not obvious how to assess the adequacy of a Bayesian analysis from within the Bayesian framework. In principle, you can do this mindlessly by marginalizing over all the model classes that might apply, maybe? But in practice, a single model class usually gets picked by non-Bayesian criteria like “does the posterior depend on the data in the right way?” or “does the posterior capture the ‘true model’ from simulated data?” Or a Bayesian may (rightly or wrongly) decide that a Bayesian analysis is not appropriate in that setting.
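
If it helps, here is a minimal sketch of that second check, whether the posterior “captures the true model” from simulated data, written as a simulate-then-recover coverage check. Everything in it (the Beta-Binomial model, the Beta(1, 1) prior, the sample sizes) is an illustrative assumption of mine, not anything from the Robins-Ritov setup itself:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy recovery check: simulate data from a known "true" parameter and ask
# whether the posterior (under the assumed model class) recovers it.
# Model, prior, and numbers are illustrative assumptions: Beta(1, 1) prior
# on theta, Binomial(n_obs, theta) data, conjugate Beta posterior.
n_sims, n_obs = 1_000, 50
a0, b0 = 1.0, 1.0            # prior hyperparameters
covered = 0

for _ in range(n_sims):
    theta_true = rng.beta(a0, b0)               # draw a "true" parameter
    y = rng.binomial(n_obs, theta_true)         # simulate data under it
    post = stats.beta(a0 + y, b0 + n_obs - y)   # conjugate posterior
    lo, hi = post.ppf([0.025, 0.975])           # central 95% credible interval
    covered += (lo <= theta_true <= hi)

# When the assumed model class matches the simulator (true by construction
# here), coverage should sit near the nominal 95%; a large shortfall is the
# kind of red flag this check is meant to catch.
print(f"coverage of 95% credible intervals: {covered / n_sims:.3f}")
```

Notice that the quantity being computed there (coverage over repeated simulations) lives outside the posterior itself, which is exactly the sense in which the adequacy judgment is a non-Bayesian one.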