I’ve had some training in Bayesian and Frequentist statistics and I think I know enough to say that it would be difficult to give a “simple” and satisfying example. The reason is that if one is dealing with finite-dimensional statistical models (where the parameter space of the model is finite-dimensional) and one has chosen a prior for those parameters that puts non-zero weight on the true values, then the Bernstein-von Mises theorem guarantees that the Bayesian posterior distribution and the sampling distribution of the maximum likelihood estimate converge to the same probability distribution (although you may need to use improper priors). This covers cases where we consider finite outcomes, such as tossing a coin or rolling a die.
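As a quick illustration of that convergence (my own sketch, not part of the original answer): for a coin toss, the Bayesian posterior mean under a uniform Beta(1, 1) prior (which puts non-zero weight on every bias in (0, 1)) and the frequentist maximum-likelihood estimate become indistinguishable as the number of tosses grows. The true bias of 0.7 here is an invented example value.

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = 0.7  # hypothetical true coin bias, chosen for the example

for n in (10, 1000, 100000):
    heads = rng.binomial(n, true_p)
    # Frequentist maximum-likelihood estimate: the raw proportion of heads
    mle = heads / n
    # Bayesian posterior mean under a Beta(1, 1) (uniform) prior:
    # posterior is Beta(heads + 1, n - heads + 1), with mean below
    post_mean = (heads + 1) / (n + 2)
    print(n, round(mle, 4), round(post_mean, 4))
```

With n = 100000 the two estimates differ only in roughly the fifth decimal place, which is the Bernstein-von Mises behavior described above in its simplest form.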
I apologize if that’s too much jargon, but for really simple models that are easy to specify you tend to get the same answer. Bayesian statistics starts to behave differently from frequentist statistics in noticeable ways when you consider infinite outcome spaces. An example is probability distributions over curves (this arises in my research on speech recognition). In this case, even with a seemingly sensible prior, you can end up in a situation where, in the limit of infinite data, the posterior distribution differs from the true distribution.
In practice, if I am learning a Gaussian mixture model for speech curves and I don’t have much data, then Bayesian procedures tend to be a bit more robust, while frequentist procedures end up over-fitting (or being somewhat erratic). Once I start getting more data, frequentist methods tend to be algorithmically more tractable and get better results. So I end up with faster computation and, say on the task of phoneme recognition, fewer errors.
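A rough sketch of that small-data contrast, using scikit-learn’s mixture models on toy one-dimensional data rather than real speech curves (the dataset, the two-cluster layout, and the component count are all invented for illustration):

```python
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.default_rng(1)
# Toy stand-in for speech features: 2 true clusters, only 30 points
X = np.concatenate([rng.normal(-2.0, 0.5, 15),
                    rng.normal(2.0, 0.5, 15)]).reshape(-1, 1)

# Frequentist EM fit with a deliberately over-specified model (6 components);
# with this little data, EM is free to split the points among all of them
em = GaussianMixture(n_components=6, random_state=0).fit(X)

# Bayesian variational fit: the Dirichlet prior on the mixing weights
# tends to shrink unneeded components toward zero weight
vb = BayesianGaussianMixture(n_components=6, random_state=0).fit(X)

print("EM weights:", np.round(em.weights_, 3))
print("VB weights:", np.round(vb.weights_, 3))
```

Inspecting the fitted mixing weights is one way to see the regularizing effect of the prior when data is scarce; with much more data, plain EM typically catches up and runs faster.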
I’m sorry if I haven’t explained it well; the difference in performance wasn’t really evident to me until I spent some time actually using both in machine learning. Unfortunately, most of the disadvantages of Bayesian approaches aren’t evident for simple statistical problems, but they become all too evident for complex statistical models.
and one has chosen a prior for those parameters that puts non-zero weight on the true values, then the Bernstein-von Mises theorem guarantees that the Bayesian posterior distribution and the sampling distribution of the maximum likelihood estimate converge to the same probability distribution (although you may need to use improper priors)
What do “non-zero weight” and “improper priors” mean?
EDIT: Improper priors mean priors that don’t sum to one. I would guess “non-zero weight” means “non-zero probability”. But then I would wonder why anyone would introduce the term “weight”. Perhaps “weight” is the term you use to express a value from a probability density function that is not itself a probability.
Improper priors are generally only considered in the case of continuous distributions, so ‘sum’ is probably not the right term; ‘integrate’ is what is usually used.
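To make that concrete, a small sketch of my own (not part of the thread): a flat prior p(mu) proportional to 1 on the whole real line integrates to infinity, so it is improper, yet for normal data with known variance it still yields a perfectly proper posterior. The true mean of 5.0 and the sample size are invented for the example.

```python
import math
import random

random.seed(0)
sigma = 1.0                      # known noise scale, assumed for the example
data = [random.gauss(5.0, sigma) for _ in range(50)]
n, xbar = len(data), sum(data) / len(data)

# Under the improper flat prior p(mu) ∝ 1, the posterior is proportional to
# the likelihood alone, and that likelihood normalizes to a proper
# distribution: mu | data ~ Normal(xbar, sigma^2 / n)
post_mean = xbar
post_sd = sigma / math.sqrt(n)
print(round(post_mean, 3), round(post_sd, 3))
```

The improper prior never appears as a probability distribution itself; it only contributes a constant factor that the normalization absorbs.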
I used the term ‘weight’ to signify an integral because of how I usually intuit probability measures. Say you have a random variable X that takes values in the real line; the probability that it takes a value in some subset S of the real line is the integral over S with respect to the given probability measure.
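In code, that intuition for a standard normal X and S = [0, 1] looks like this (a stdlib-only sketch; the trapezoidal sum stands in for the integral of the density over S):

```python
import math

def std_normal_pdf(x):
    # Density of the standard normal distribution
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# P(X in S) for X ~ N(0, 1) and S = [0, 1]:
# integrate the density over S with a fine trapezoidal rule
n = 10000
h = 1.0 / n
prob = sum(0.5 * h * (std_normal_pdf(i * h) + std_normal_pdf((i + 1) * h))
           for i in range(n))

# Cross-check: the same probability via the CDF, Phi(1) - Phi(0)
def std_normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

assert abs(prob - (std_normal_cdf(1.0) - std_normal_cdf(0.0))) < 1e-8
print(round(prob, 6))
```

The “weight” the prior puts on a set of parameter values is exactly this kind of integral, which is why a density value at a single point is not itself a probability.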
Thanks much!
No problem.
There’s a good discussion of this way of viewing probability distributions in the Wikipedia article. There’s also a fantastic textbook on the subject that has really made a world of difference for me mathematically.