How can you “have” an infinitely complex prior?
It doesn’t have to be infinitely complex. Let’s say there are only ten ravens and ten crows, each of which can be black or white. Chapman says I can’t talk about them using probability theory because there are two kinds of objects, so I need meta-probabilities and quantifiers and whatnot. But I don’t need any of that stuff, it’s enough to have a prior over possible worlds, which would be finite and rather small.
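To make that concrete, here is a minimal sketch of what a prior over possible worlds looks like in this toy case. Everything specific in it is assumed for illustration: a uniform prior over the 2^20 colourings of the twenty birds, and an observation that the first five ravens inspected were all black.

```python
from fractions import Fraction

# Toy model (assumed for illustration): 10 ravens and 10 crows, each black or white.
# Encode a possible world as a 20-bit integer: bit i set means bird i is black;
# bits 0-9 are the ravens, bits 10-19 the crows. That gives 2**20 possible worlds.
N_RAVENS, N_CROWS = 10, 10
N_WORLDS = 2 ** (N_RAVENS + N_CROWS)

# Assumed uniform prior over possible worlds (one choice among many).
prior = Fraction(1, N_WORLDS)

def is_black(world, bird):
    return bool((world >> bird) & 1)

# Assumed evidence: the first 5 ravens were inspected and all were black.
def consistent_with_evidence(world):
    return all(is_black(world, r) for r in range(5))

# Hypothesis: every raven is black.
def all_ravens_black(world):
    return all(is_black(world, r) for r in range(N_RAVENS))

# Bayesian update: P(hypothesis | evidence) = P(hypothesis, evidence) / P(evidence),
# computed by summing prior mass over the relevant worlds.
p_evidence = sum(prior for w in range(N_WORLDS) if consistent_with_evidence(w))
p_joint = sum(prior for w in range(N_WORLDS) if all_ravens_black(w))  # hypothesis implies the evidence

print(p_joint / p_evidence)  # 1/32: each of the 5 unseen ravens is black with probability 1/2
```

The point is only that no meta-probabilities or quantifiers are needed here: the two kinds of objects are folded into the world description, and updating is ordinary conditioning on that finite prior.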
Only you need to keep switching priors to deal with one finite and small problem after another. Whatever that is, it is not strong Bayes.
That amounts to saying that Bayes works in finite, restricted cases, which no one is disputing. The thing is that your scheme doesn’t work in the general case.