A Bayesian does not have the option of ‘just skipping that step’ and choosing to accept whichever prior was mandated by Fisher
Why in the world doesn’t a Bayesian have that option? I thought you were a free people. :-) How’d you decide to reject those priors in favor of other ones, anyway? As far as I currently understand, there’s no universally accepted mathematical way to pick the best prior for every given problem, and no psychologically coherent way to pick it out of your head either, because it ain’t there. In addition to that, here’s some anecdotal evidence: I never ever heard of a Bayesian agent accepting or rejecting a prior.
That was a partial quote and partial paraphrase of the claim made by cousin_it (hang on, that’s you! huh?). I thought that the “we are a free people and can use the frequentist implicit priors whenever they happen to be the best available” claim had been made more than enough times so I left off that nitpick and focussed on my core gripe with the post in question. That is, the suggestion that using priors because tradition tells you to makes them less ‘bullshit’.
I think your inclusion of ‘just’ allows for the possibility that, of all possible configurations of prior probabilities, the frequentist one so happens to be the one worth choosing.
I never ever heard of a Bayesian agent accepting or rejecting a prior.
I’m confused. What do you mean by accepting or rejecting a prior?
Funny as it is, I don’t contradict myself. A Bayesian doesn’t have the option of skipping the prior altogether, but does have the option of picking priors with frequentist justifications, which option you call “bullshit”, though for the life of me I can’t tell how you can tell.
Frequentists have valid reasons for their procedures besides tradition: the procedures can be shown to always work, in a certain sense. On the other hand, I know of no Bayesian-prior-generating procedure that can be shown to work in this sense or any other sense.
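To make that “always work, in a certain sense” concrete: a quick Python sketch (mine, not from the discussion) of the standard coverage guarantee for a 95% confidence interval. The specific numbers (known sigma = 1, n = 30, true mean 3.7) are arbitrary choices for illustration; the point is that the guarantee attaches to the procedure and holds whatever the true parameter is, with no prior in sight.

```python
import math
import random

random.seed(1)

def ci_95(sample):
    # 95% confidence interval for the mean of N(mu, 1) data (sigma known to be 1)
    mean = sum(sample) / len(sample)
    half = 1.96 / math.sqrt(len(sample))
    return mean - half, mean + half

# The guarantee is about the procedure: over repeated experiments,
# the interval covers the true mean ~95% of the time, whatever mu is.
mu = 3.7  # arbitrary true mean; coverage holds for any value
trials = 2000
covered = 0
for _ in range(trials):
    sample = [random.gauss(mu, 1.0) for _ in range(30)]
    lo, hi = ci_95(sample)
    covered += lo <= mu <= hi
print(covered / trials)  # close to 0.95
```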
I’m confused. What do you mean by accepting or rejecting a prior?
Some priors are very bad. If a Bayesian somehow ends up with such a prior, they’re SOL because they have no notion of rejecting priors.
Some priors are very bad. If a Bayesian somehow ends up with such a prior, they’re SOL because they have no notion of rejecting priors.
There are two priors for A that a Bayesian is unable to update from: p(A) = 0 and p(A) = 1. If a Bayesian ever assigns p(A) = 0 or p(A) = 1 and is mistaken, then they fail at life. No second chances. Shalizi’s hypothetical agent started with the absolute (and insane) belief that the distribution was not a mix of the two Gaussians in question. That did not change through the application of Bayes’ rule.
Bayesians cannot reject a prior of 0. They can ‘reject’ a prior of “That’s definitely not going to happen. But if I am faced with overwhelming evidence then I may change my mind a bit.” They just wouldn’t write that state as p = 0, or imply it by excluding it from a simplified model without being willing to review the model for sanity afterward.
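The arithmetic behind “no second chances” is a one-liner. A toy sketch (my own numbers, just for illustration): a modest prior moves a lot under strong evidence, but a prior of exactly 0 stays at 0 no matter what, because Bayes’ rule only ever multiplies it.

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    # Posterior P(H|E) from prior P(H) and likelihoods P(E|H), P(E|not H)
    numerator = prior * likelihood_h
    denominator = numerator + (1 - prior) * likelihood_not_h
    return numerator / denominator

# Strong evidence (100:1 likelihood ratio) moves a small but nonzero prior a lot...
print(bayes_update(0.01, 1.0, 0.01))  # ≈ 0.5025
# ...but a prior of exactly 0 never moves, however strong the evidence:
print(bayes_update(0.0, 1.0, 0.01))   # 0.0
```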
I am trying to understand the examples on that page, but they seem strange; shouldn’t there be a model with parameters, and a prior distribution for those parameters? I don’t understand the inferences. Can someone explain?
Well, the first example is a model with a single parameter. Roughly speaking, the Bayesian initially believes that the true model is either a Gaussian around 1, or a Gaussian around −1. The actual distribution is a mix of those two, so the Bayesian has no chance of ever arriving at the truth (the prior for the truth is zero), instead becoming over time more and more comically overconfident in one of the initial preposterous beliefs.
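A rough simulation of that example (my own sketch, using stdlib Python; the equal-weight mixture and the sample size are my assumptions, not Shalizi’s exact setup): the data come from a 50/50 mix of N(+1, 1) and N(−1, 1), but the agent’s hypothesis space contains only the two pure Gaussians, each with prior 1/2. The posterior piles essentially all its mass onto one of the two wrong hypotheses.

```python
import math
import random

random.seed(0)

def log_normal_pdf(x, mu):
    # log density of N(mu, 1)
    return -0.5 * (x - mu) ** 2 - 0.5 * math.log(2 * math.pi)

# True distribution: an equal mixture of N(+1,1) and N(-1,1) —
# outside the agent's hypothesis space, so its prior there is 0.
data = [random.gauss(random.choice([1.0, -1.0]), 1.0) for _ in range(10_000)]

# The agent's two hypotheses, with a 50/50 prior between them.
log_post = {+1.0: math.log(0.5), -1.0: math.log(0.5)}
for x in data:
    for mu in log_post:
        log_post[mu] += log_normal_pdf(x, mu)

# Normalize; the posterior concentrates on one pure Gaussian,
# never on the mixture actually generating the data.
m = max(log_post.values())
z = sum(math.exp(v - m) for v in log_post.values())
posterior = {mu: math.exp(v - m) / z for mu, v in log_post.items()}
print(posterior)
```

The log-odds between the two hypotheses do a random walk that drifts arbitrarily far from zero, which is the “comically overconfident” behavior: near-certainty in one preposterous belief, with the truth at prior (hence posterior) zero forever.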