Your criticism of Dutch Book is that it doesn’t seem to you useful to add anti-Dutch-book checkers to your toolbox. My support of Dutch Book is that if something inherently produces Dutch Books then it can’t be the right epistemological principle because clearly some of its answers must be wrong even in the limit of well-calibrated prior knowledge and unbounded computing power.
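(To make "inherently produces Dutch Books" concrete, here is a minimal sketch with made-up numbers: an agent whose credences over an exhaustive partition sum to more than 1 will accept a set of tickets, each fairly priced by its own lights, that together lose money in every possible outcome.)

```python
# A minimal Dutch Book sketch (hypothetical numbers).
# The agent prices a $1 ticket on event E at its credence P(E).
# These credences over an exhaustive partition sum to 1.1 > 1,
# so a bookie who sells the agent one ticket per cell profits
# no matter what happens.
credences = {"rain": 0.4, "no_rain": 0.7}  # incoherent: sums to 1.1

cost_to_agent = sum(credences.values())    # agent pays $1.10 up front

for outcome in credences:                  # exactly one cell occurs...
    payout = 1.0                           # ...so exactly one ticket pays $1
    print(f"outcome={outcome}: agent nets {payout - cost_to_agent:+.2f}")
# Prints -0.10 for every outcome: a guaranteed loss, i.e., a Dutch Book.
```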
The complete class theorem I understand least of the set, and it's probably not much entwined with my true rejection, so it would be logically rude to lead you on here. Again, though, the point that every local optimum is Bayesian tells us something about non-Bayesian rules producing intrinsically wrong answers. If I believed your criticism, I think it would be forceful; I could accept a world in which, for every pair of a rational plan with a world, there is an irrational plan which does better in that world, but no plausible way for a cognitive algorithm to output that irrational plan: plans which are the equivalent of "Just buy the winning lottery ticket, and you'll make more money!" I can imagine being shown that the complete class theorem demonstrates only an "unfair" superiority of this sort, and that only frequentist methods can produce actual outputs for realistic situations even in the limit of unbounded computing power. But I do not believe you have leveled such a criticism, and it doesn't square well with my current understanding that the decision rules being considered are computable rules from observations to actions. You didn't actually tell me about a frequentist algorithm which is supposed to be realistic, and show why the Bayesian rule which beats it is beating it unfairly.
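(A toy numerical sketch of what the theorem is saying, with invented risk numbers: when there are only two possible states, a decision rule reduces to its pair of risks, the dominated rules are the inadmissible ones, and every undominated rule turns out to minimize prior-weighted risk, i.e., to be Bayes, against some prior.)

```python
# Toy complete-class illustration (hypothetical risk numbers).
# With two states theta1, theta2, each rule is just its risk pair
# (R(theta1), R(theta2)). Admissible rules sit on the lower-left
# frontier of the risk set, and each one is Bayes for SOME prior.
rules = {
    "A": (1.0, 4.0),
    "B": (2.0, 2.0),
    "C": (4.0, 1.0),
    "D": (3.0, 3.5),   # dominated by B: worse risk in both states
}

def dominated(name):
    r1, r2 = rules[name]
    return any(s1 <= r1 and s2 <= r2 and (s1, s2) != (r1, r2)
               for other, (s1, s2) in rules.items() if other != name)

for name, (r1, r2) in rules.items():
    if dominated(name):
        print(f"{name}: inadmissible (dominated)")
        continue
    # scan priors pi = P(theta1); each admissible rule should minimize
    # Bayes risk pi*R(theta1) + (1-pi)*R(theta2) for some range of pi
    bayes_for = [p / 100 for p in range(101)
                 if all(p / 100 * r1 + (1 - p / 100) * r2
                        <= p / 100 * s1 + (1 - p / 100) * s2 + 1e-12
                        for s1, s2 in rules.values())]
    print(f"{name}: Bayes against priors pi in "
          f"[{min(bayes_for):.2f}, {max(bayes_for):.2f}]")
```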
If you want to hit me square in the true rejection, I suggest starting with VNM (the von Neumann-Morgenstern utility theorem). The fact that our epistemology has to plug into our actions is one reason why I roll my eyes at the likes of Dempster-Shafer, or at frequentist confidence intervals that don't convert to credibility distributions.
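(A standard textbook example of that confidence-interval complaint, sketched below: with two observations drawn uniformly from an interval of width 1 around θ, the interval between them covers θ exactly 50% of the time, yet after seeing the data, 50% can be obviously the wrong credence; when the observations are nearly 1 apart, the interval contains θ with certainty.)

```python
# A classic confidence-vs-credence illustration (standard example,
# not original to this thread): X1, X2 ~ Uniform(theta - 0.5, theta + 0.5),
# and the interval [min(X1,X2), max(X1,X2)] is an exact 50% confidence
# interval. But conditional on the two samples being far apart, it
# covers theta essentially always, so "50%" is not a sane credence.
import random

random.seed(0)
theta = 10.0
total = covered = wide_total = wide_covered = 0

for _ in range(200_000):
    x1 = theta + random.uniform(-0.5, 0.5)
    x2 = theta + random.uniform(-0.5, 0.5)
    lo, hi = min(x1, x2), max(x1, x2)
    hit = lo <= theta <= hi
    total += 1
    covered += hit
    if hi - lo > 0.9:            # samples nearly a full width apart
        wide_total += 1
        wide_covered += hit

print(f"overall coverage:             {covered / total:.3f}")           # ~0.50
print(f"coverage when |x1-x2| > 0.9:  {wide_covered / wide_total:.3f}")  # ~1.00
```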
> I could accept a world in which for every pair of a rational plan with a world, there is an irrational plan which does better in that world, but no plausible way for a cognitive algorithm to output that irrational plan
We already live in that world.
(The following is not evidence, just an illustrative analogy.) Ever seen Groundhog Day? Imagine the protagonist skipping the bulk of the movie and going straight to the last day. It is wall-to-wall WTF, but it's very optimal.
One of the criticisms I raised is that merely being able to point to all the local optima is not a particularly impressive property of an epistemological theory. Many of those local optima will be horrible! (My criticism of VNM is essentially the same.)
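(A standard decision-theory example of a horrible optimum, not anything from this exchange: for estimating a normal mean under squared error, the rule that ignores the data and always answers 17 is admissible, because nothing else can match its zero risk at θ = 17, and it is the Bayes rule for a point-mass prior there. It is also useless as epistemology.)

```python
# "Admissible but horrible" sketch (textbook example). The constant
# rule delta(x) = 17 has zero risk at theta = 17, so no rule can
# weakly dominate it -- it is admissible, and Bayes against a
# point-mass prior at 17 -- yet it is catastrophic everywhere else.
import random
random.seed(0)

def risk(rule, theta, n_sims=20_000):
    """Monte Carlo mean squared error of `rule` at a given theta."""
    return sum((rule(random.gauss(theta, 1.0)) - theta) ** 2
               for _ in range(n_sims)) / n_sims

stubborn = lambda x: 17.0   # ignore the data entirely
honest   = lambda x: x      # just report the single observation

for theta in [17.0, 20.0, 25.0]:
    print(f"theta={theta:5.1f}  stubborn risk={risk(stubborn, theta):6.2f}"
          f"  honest risk={risk(honest, theta):5.2f}")
# stubborn wins at theta = 17 (risk 0) and is terrible elsewhere;
# honest has risk ~1 everywhere. Neither dominates the other.
```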
Many frequentist methods, such as minimax, also provide local optima, but ones which actually have certain nice properties, such as a guaranteed bound on worst-case risk. And minimax provides a complete decision rule, not just a probability distribution, so it plugs directly into actions.
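(A sketch of the kind of nice property meant here, using the textbook binomial case: the minimax estimator of a coin's bias has the same risk at every parameter value, so its worst case is as good as any rule's worst case, and, consistent with the complete class picture, it is itself the Bayes rule under its least favorable prior.)

```python
# Minimax sketch (standard textbook example): for X ~ Binomial(n, p)
# under squared error, the minimax estimator is
# (X + sqrt(n)/2) / (n + sqrt(n)). Its risk is constant in p -- the
# worst-case guarantee minimax buys -- and it is exactly the Bayes
# estimator under a Beta(sqrt(n)/2, sqrt(n)/2) prior (least favorable).
from math import sqrt

n = 25

def risk_minimax(p):
    # exact risk (variance + bias^2) of the minimax estimator
    bias = sqrt(n) * (0.5 - p) / (n + sqrt(n))
    var = n * p * (1 - p) / (n + sqrt(n)) ** 2
    return var + bias ** 2

def risk_mle(p):
    # exact risk of the maximum likelihood estimator X / n
    return p * (1 - p) / n

for p in [0.1, 0.3, 0.5]:
    print(f"p={p}: minimax risk={risk_minimax(p):.5f}, "
          f"MLE risk={risk_mle(p):.5f}")
# minimax risk is constant (~0.00694 for n = 25); the MLE is better
# near p = 0 or 1 and worse near p = 0.5 -- neither dominates.
```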