I believe the hard-line Bayesian response to that would be that model checking should itself be a Bayesian process.
“But,” the soft Bayesians might say, “how do you expand that ‘something else’ into new models by Bayesian means? You would need a universal prior, a prior whose support includes every possible hypothesis. Where do you get one of those? Solomonoff? Ha! And if what you actually do when your model doesn’t fit looks the same as what we do, why pretend it’s Bayesian inference?”
I think a hard line needs to be drawn between statistics and epistemology. Statistics is merely a method of approximating epistemology—though a very useful one. The best statistical method in a given situation is the one that best approximates correct epistemology. (I’m not saying this is the only use for statistics, but I can’t seem to make sense of it otherwise.)
Now suppose Bayesian epistemology is correct—i.e. let’s say Cox’s theorem plus a Solomonoff prior. The correct answer to any induction problem is to perform the true Bayesian update implied by this epistemology, but that update isn’t computable. Statistics gives us some common ways to get around this problem. Here are a few, with illustrative sketches after the list:
1) Bayesian statistics approach: restrict the class of possible models and put a reasonable prior over that class, then do the Bayesian update. This has exactly the same problem that Mencius and Cosma pointed out.
2) Frequentist statistics approach: restrict the class of possible models and come up with a consistent estimate of which model in that class is correct. This has all the problems that Bayesians constantly criticize frequentists for, but it typically allows a much wider class of possible models (crucially, you often don’t have to assume a distributional form).
3) Something hybrid: e.g., Bayesian statistics with model checking, or empirical Bayes (where the prior is estimated from the data), and so on.
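For concreteness, here is the idealized update mentioned before the list, written in the standard Solomonoff-induction notation: U is a fixed universal prefix machine, ℓ(p) is the length of program p, h ranges over hypotheses, and D is the observed data.

```latex
% Solomonoff prior: weight hypothesis h by the total measure of
% programs that output it on a universal prefix machine U.
m(h) = \sum_{p \,:\, U(p) = h} 2^{-\ell(p)}

% The "true" Bayesian update is then ordinary conditioning under m:
P(h \mid D) = \frac{P(D \mid h)\, m(h)}{\sum_{h'} P(D \mid h')\, m(h')}
```

The sum over programs is what makes this uncomputable—deciding which programs output h runs into the halting problem—which is why everything in 1) through 3) is an approximation.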
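To make 1) and 2) concrete, here is a minimal sketch on toy coin-flip data. The Beta(2, 2) prior, the sample size, and the true bias are all illustrative choices of mine, not anything from the original discussion.

```python
import numpy as np

rng = np.random.default_rng(0)
flips = rng.binomial(1, 0.7, size=50)   # toy data: 50 flips of a coin with true bias 0.7
heads, n = flips.sum(), len(flips)

# 1) Bayesian statistics approach: restrict to the Bernoulli(theta) model
#    class, put a Beta(2, 2) prior over theta, do the conjugate update.
a0, b0 = 2.0, 2.0
a_post, b_post = a0 + heads, b0 + (n - heads)
posterior_mean = a_post / (a_post + b_post)

# 2) Frequentist statistics approach: a consistent estimator of the bias.
#    The sample mean is consistent for E[X] under any distribution with a
#    finite mean; that is the "no distributional form" point above.
theta_hat = heads / n

print(f"posterior mean: {posterior_mean:.3f}   consistent estimate: {theta_hat:.3f}")
```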
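And a sketch of the empirical-Bayes flavor of 3), under the usual many-related-problems setup (here, many coins, each with its own bias): the prior’s hyperparameters are estimated from the pooled data by method of moments. Again, every concrete number is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n_coins, n_flips = 200, 20
true_thetas = rng.beta(8.0, 4.0, size=n_coins)   # each coin has its own bias
heads = rng.binomial(n_flips, true_thetas)       # 20 flips per coin

# Estimate the Beta(a, b) prior from the pooled data by method of moments
# on the per-coin head rates, correcting for binomial sampling noise.
rates = heads / n_flips
m, v = rates.mean(), rates.var()
v_theta = max((v - m * (1 - m) / n_flips) / (1 - 1 / n_flips), 1e-6)
common = m * (1 - m) / v_theta - 1               # estimates a + b
a_hat, b_hat = m * common, (1 - m) * common

# Update each coin with the *estimated* prior: shrinkage toward the pool.
post_means = (a_hat + heads) / (a_hat + b_hat + n_flips)
print(f"estimated prior: Beta({a_hat:.2f}, {b_hat:.2f})")
print(f"first coin: raw rate {rates[0]:.2f} -> shrunk estimate {post_means[0]:.2f}")
```

The Bayesian machinery is intact here, but the prior itself was chosen by looking at the data, which is exactly the kind of move a purist would object to and a pragmatist would defend.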
Now superficially, 1) looks the most like the true Bayesian update—you don’t look at the data twice, and you’re actually performing a Bayesian update. But you don’t get points for looking like the true Bayesian update; you get points for giving the same answer as the true Bayesian update. If you do 1), there’s always some chance that the class of models you’ve chosen is too restrictive, excluding models the true update would favor. In theory you could keep doing 1) by expanding the class of possible models and putting a prior over the larger class, but at some point that becomes computationally infeasible. Model checking is a computationally feasible way of approximating this expansion (a minimal sketch of one such check follows below). And, a priori, I see no reason to think that some frequentist method won’t give the best computationally feasible approximation in some situations.
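Here is a minimal sketch of the kind of check meant above: simulate replicated data from the fitted model and ask whether a statistic of the real data looks typical. To keep it short I use plug-in point estimates rather than posterior draws (a fuller version would average over draws of the parameters), and the heavy-tailed data, Normal model, and max-|x| statistic are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.standard_t(df=3, size=100)        # "real" data: heavy-tailed
mu, sigma = data.mean(), data.std()          # fit a plain Normal (too-narrow class)

# Predictive check: simulate replicated datasets from the fitted model and
# ask whether the observed max |value| looks typical under the model.
T_obs = np.abs(data).max()
T_rep = np.abs(rng.normal(mu, sigma, size=(5000, len(data)))).max(axis=1)
p_value = (T_rep >= T_obs).mean()
print(f"predictive p-value: {p_value:.3f}")  # near 0 or 1 flags a too-narrow model class
```

An extreme p-value here doesn’t tell you which model to adopt instead; it tells you to go back and expand the class, which is the feasible stand-in for the uncomputable expansion described above.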
So, basically, a “hard-line Bayesian” should do model checking and sometimes even frequentist statistics. (Similarly, a “hard-line frequentist” in the epistemological sense should sometimes do Bayesian statistics. And, in fact, they do this all the time in econometrics.)
See my similar comments here and here.