Bayesian techniques, by choosing a specific prior (such as a Gaussian prior), make an assumption that will hurt them in extreme cases or when the data is not drawn from the prior. The tradeoff is that frequentist methods, by avoiding such assumptions, tend to be much more conservative, requiring more data to reach the same conclusion.
Bayesian methods with uninformative (possibly improper) priors agree with frequentist methods whenever the latter make sense.
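(A standard illustration of that agreement: for observations $X_1,\dots,X_n \sim N(\mu,\sigma^2)$ with $\sigma$ known and the improper flat prior $p(\mu) \propto 1$, the posterior is $N(\bar{x}, \sigma^2/n)$, so the central 95% credible interval $\bar{x} \pm 1.96\,\sigma/\sqrt{n}$ is exactly the standard frequentist confidence interval.)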
Can you explain further? Casually, I consider results like compressed sensing and multiplicative weights to be examples of frequentist approaches (as do people working in these areas), which achieve their results in adversarial settings where no prior is available. I would be interested in seeing how Bayesian methods with improper priors recommend similar behavior.
I admit I’m not familiar with either of those… Can you give a simple example of an “adversarial setting where no prior is available”?
I let you choose some linear functionals, and then tell you the value of each one on some unknown sparse vector (compressed sensing).
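To make that concrete, here is a minimal numerical sketch; the random Gaussian functionals, the dimensions, and the choice to recover by l1 minimization (basis pursuit, solved as a linear program) are illustrative assumptions, not part of the example above:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Recover x from b = A @ x by minimizing ||x||_1 subject to A x = b,
    written as a linear program over (x, t) with |x_i| <= t_i."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])      # objective: sum_i t_i
    A_eq = np.hstack([A, np.zeros((m, n))])             # A x = b
    A_ub = np.block([[np.eye(n), -np.eye(n)],           #  x - t <= 0
                     [-np.eye(n), -np.eye(n)]])         # -x - t <= 0
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds)
    return res.x[:n]

rng = np.random.default_rng(0)
n, k, m = 50, 3, 20
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n))            # the "linear functionals", chosen at random
x_hat = basis_pursuit(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))  # should be ~0: exact recovery from m << n measurements
```

Nothing in the recovery step uses a prior over the sparse vector; the only assumption is sparsity itself.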
We play an iterated game with unknown payoffs; you observe your payoff in each round, but nothing more, and want to maximize total payoff (multiplicative weights).
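And a minimal sketch of the multiplicative-weights idea in this setting, in the Exp3 form that handles seeing only the chosen action's payoff; the payoff sequence, the parameter gamma, and the horizon are placeholders:

```python
import numpy as np

def exp3(payoff_fn, n_actions, n_rounds, gamma=0.1, seed=0):
    """Multiplicative weights with importance-weighted estimates (Exp3).

    payoff_fn(t, action) returns a payoff in [0, 1]; it may depend on the
    history in an arbitrary, adversarial way. No prior over payoffs is
    assumed anywhere.
    """
    rng = np.random.default_rng(seed)
    weights = np.ones(n_actions)
    total = 0.0
    for t in range(n_rounds):
        probs = (1 - gamma) * weights / weights.sum() + gamma / n_actions
        action = rng.choice(n_actions, p=probs)
        payoff = payoff_fn(t, action)        # only the chosen action's payoff is observed
        total += payoff
        estimate = payoff / probs[action]    # unbiased estimate of that action's payoff
        weights[action] *= np.exp(gamma * estimate / n_actions)
        weights /= weights.max()             # rescale to avoid overflow; probabilities unchanged
    return total

# Illustrative adversary: the good action shifts every 200 rounds, with no
# generative model behind it.
payoffs = lambda t, a: float(a == (t // 200) % 3)
print(exp3(payoffs, n_actions=3, n_rounds=1000))
```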
Put even more simply, what is the Bayesian method that plays randomly in rock-paper-scissors against an unknown adversary? Minimax play seems like a canonical example of a frequentist method; if you have any fixed model of your adversary you might as well play deterministically (at least if you are doing consequentialist loss minimization).
The minimax estimator can be related to Bayesian estimation through the concept of a “least-favorable prior”.
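To spell that out (using the standard decision-theoretic statement rather than anything specific to this thread): if $\delta_{\pi^*}$ is a Bayes rule for a prior $\pi^*$ and its risk satisfies $R(\theta, \delta_{\pi^*}) \le r(\pi^*, \delta_{\pi^*})$ for every $\theta$ (for instance because the rule is an equalizer, with constant risk), then $\delta_{\pi^*}$ is minimax and $\pi^*$ is least favorable:
$$\sup_\theta R(\theta, \delta_{\pi^*}) = r(\pi^*, \delta_{\pi^*}) = \inf_\delta \sup_\theta R(\theta, \delta).$$
In rock-paper-scissors, the uniform prior over the opponent's pure strategies is least favorable, and the uniform mixed strategy is a Bayes response to it with constant (zero) expected loss against every pure strategy, hence minimax; so the "play randomly" answer does come out of a Bayesian calculation, but only against this worst-case prior.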
Are you referring to the result that every non-dominated decision procedure is either a Bayesian procedure or a limit of Bayesian procedures? If so, one could imagine a frequentist procedure that is strictly dominated by other procedures, but where finding the dominating procedures is computationally infeasible. Alternatively, a procedure could be non-dominated, and thus Bayesian for the right choice of prior, but the correct choice of prior could be difficult to find (the only proof I know of the “non-dominated ⇒ Bayesian” result is non-constructive).