I am trying to understand the examples on that page, but they seem strange; shouldn’t there be a model with parameters, and a prior distribution for those parameters? I don’t understand the inferences. Can someone explain?
Well, the first example is a model with a single parameter. Roughly speaking, the Bayesian initially believes that the true distribution is either a Gaussian centered at 1 or a Gaussian centered at −1. The actual distribution is a mixture of the two, so the Bayesian has no chance of ever arriving at the truth (the prior puts zero mass on it); instead, over time, he becomes more and more comically overconfident in one or the other of the initial preposterous beliefs.
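You can see this by simulating it. The sketch below is my own reconstruction of the setup, assuming unit-variance Gaussians at ±1, equal prior weights, and a 50/50 mixture as the true distribution; the key fact is that the per-observation log-likelihood ratio between the two hypotheses is just 2x, so the posterior log-odds form a zero-drift random walk that swings to extremes of order √n:

```python
import numpy as np

rng = np.random.default_rng(0)

# True data-generating process: a 50/50 mixture of N(+1, 1) and N(-1, 1).
# The Bayesian's prior puts mass only on the two pure Gaussians.
n = 10_000
components = rng.integers(0, 2, size=n)        # 0 -> mean -1, 1 -> mean +1
x = rng.normal(loc=2.0 * components - 1.0, scale=1.0)

# For unit-variance Gaussians at +1 and -1, the per-observation
# log-likelihood ratio  log N(x; +1, 1) - log N(x; -1, 1)  equals 2x,
# so with equal prior weights the posterior log-odds are 2 * sum(x).
log_odds = 2.0 * np.cumsum(x)

# Numerically stable sigmoid: P(mean = +1 | data) = 1 / (1 + exp(-log_odds)).
posterior_plus = np.exp(-np.logaddexp(0.0, -log_odds))

# Under the mixture, E[x] = 0, so log_odds is a zero-drift random walk:
# it wanders to magnitudes of order sqrt(n), pinning the posterior near
# 0 or 1 for long stretches even though neither hypothesis is true.
print("final posterior for mean=+1:", posterior_plus[-1])
print("max deviation from 0.5:", np.abs(posterior_plus - 0.5).max())
```

Run it a few times with different seeds: the posterior never settles at anything sensible like 0.5; it lurches into near-certainty about one wrong model, occasionally flips, and gets (on average) more extreme as n grows.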