I am deeply confused by your statement that the complete class theorem only implies that Bayesian techniques are locally optimal. If for EVERY non-Bayesian method there’s a better Bayesian method, then the globally optimal technique must be a Bayesian method.
There is a difference between “the globally optimal technique is Bayesian” and “a Bayesian technique is globally optimal”. In the latter case, we still have to choose from an infinitely large family of techniques (one for each choice of prior), and Bayes doesn’t tell me which of these I should choose. In contrast, there are frequentist techniques (e.g. minimax) that give me a full prescription of what I ought to do. Those techniques can in many (but not all) cases be interpreted in terms of a prior, but “choose a prior and update” wasn’t the advice that led me to that decision; rather, it was “play the minimax decision rule”.
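To make that concrete, here is a minimal sketch (a standard textbook example, not anything from the post itself) of estimating a Bernoulli parameter under squared-error loss. The minimax estimator turns out to coincide with the Bayes posterior mean under a Beta(√n/2, √n/2) prior, but nothing about “choose a prior and update” would have pointed you to that particular prior; it falls out of demanding constant risk.

```python
import numpy as np
from scipy.stats import binom

# Estimating a Bernoulli parameter p from X ~ Binomial(n, p) under
# squared-error loss. The minimax rule (X + sqrt(n)/2) / (n + sqrt(n))
# has constant frequentist risk, and it happens to equal the posterior
# mean under a Beta(sqrt(n)/2, sqrt(n)/2) prior (the "least favorable"
# prior). The prior is an after-the-fact interpretation of the rule.

n = 100

def risk(estimator, p):
    """Frequentist risk E[(estimator(X) - p)^2] under X ~ Binomial(n, p)."""
    x = np.arange(n + 1)
    return np.sum(binom.pmf(x, n, p) * (estimator(x) - p) ** 2)

def minimax_rule(x):
    # Bayes rule under Beta(sqrt(n)/2, sqrt(n)/2), but derived by
    # asking for constant risk, not by picking that prior in advance.
    return (x + np.sqrt(n) / 2) / (n + np.sqrt(n))

def mle(x):
    return x / n

for p in [0.1, 0.3, 0.5, 0.9]:
    print(f"p={p}: minimax risk = {risk(minimax_rule, p):.5f}, "
          f"MLE risk = {risk(mle, p):.5f}")
# The minimax rule's risk is the same at every p (about 0.00207 for
# n=100), while the MLE's risk varies with p and peaks above it at 0.5.
```

The point of the sketch is that the constant-risk criterion, not the prior, is what singles out the rule; the Beta(√n/2, √n/2) prior only shows up when you reverse-engineer the answer.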
As I said in my post:
I would much rather have someone hand me something that wasn’t a local optimum but was close to the global optimum, than something that was a local optimum but was far from the global optimum.