“[…] A_p the distribution over how often the coin will come up heads […]”—I understood A_p to be a sort of distribution over models; we do not know/talk about the model itself, but we know that if a model A_p is true, then the probability of heads is equal to p by definition of A_p. Perhaps the model A_p is the proposition “the centre of mass of the coin is at p” or “the bias-weighting of the coin is p”, but we do not care as long as the resulting probability of heads is p. So how can the prior not be indifferent when we do not know the nature of each proposition A_p in a set of mutually exclusive and exhaustive propositions?
I can’t see anything wrong in what you’ve said there, but I still have to insist, without a good argument, that dropping P(A_p|I) is incorrect. In my vague defense, consider the two A_p distributions drawn on p558, for the penny and for Mars. Those distributions are as different as they are because of the different prior information. If it were correct to drop the prior term a priori, wouldn’t those distributions look the same?
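To make that point concrete, here’s a toy numerical sketch (an invented binomial example, not Jaynes’s actual penny/Mars calculation): two different priors over p, updated on the same data, give visibly different posteriors—so the prior term can’t be dropped in general.

```python
import numpy as np

# Grid over possible values of p (the probability of heads).
p = np.linspace(0.001, 0.999, 999)

# Same made-up data in both cases: 4 heads in 5 flips.
heads, flips = 4, 5
likelihood = p**heads * (1 - p)**(flips - heads)

# "Penny"-style prior: strong belief the coin is fair, sharply
# peaked at 0.5 (unnormalised Beta(100, 100) density).
penny_prior = p**99 * (1 - p)**99
# Maximally ignorant prior: uniform over (0, 1).
flat_prior = np.ones_like(p)

def posterior(prior):
    post = likelihood * prior
    return post / post.sum()

penny_post = posterior(penny_prior)
flat_post = posterior(flat_prior)

# The peaked prior keeps the posterior pinned near 0.5; the flat
# prior lets the data pull it toward 4/5 = 0.8.
print(p[np.argmax(penny_post)])  # close to 0.507
print(p[np.argmax(flat_post)])   # close to 0.8
```

Same likelihood, different priors, different posteriors—exactly the penny-vs-Mars picture.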
You are right; dropping priors in the A_p distribution is probably not a general rule. Perhaps the propositions don’t always need to be interpretable for us to be able to impose priors? For example, people impose priors over the parameter space of a neural network, which is certainly not interpretable. But the topic of Bayesian neural networks is beyond me.
It seems like in practice, when there’s a lot of data, people like Jaynes and Gelman are happy to assign low-information (or “uninformative”) priors, knowing that with a lot of data the prior ends up getting washed away anyway. So just slapping a uniform prior down might be OK in a lot of real-world situations. This is I think pretty different than just dropping the prior completely, but gets the same job done.
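That washing-out effect is easy to check numerically. A sketch under made-up assumptions (a binomial likelihood and a Beta-shaped informative prior, worked in log space to avoid underflow): as the number of flips grows, the posteriors from the informative prior and the flat prior converge.

```python
import numpy as np

# Grid over the coin bias p.
p = np.linspace(0.001, 0.999, 999)

# An informative prior peaked at 0.5 (~Beta(50, 50)) vs a flat one.
log_informative = 49 * np.log(p) + 49 * np.log(1 - p)
log_flat = np.zeros_like(p)

def posterior(log_prior, heads, n):
    log_post = heads * np.log(p) + (n - heads) * np.log(1 - p) + log_prior
    post = np.exp(log_post - log_post.max())  # subtract max for stability
    return post / post.sum()

for n in (10, 100, 10_000):
    heads = round(0.7 * n)  # pretend the data keep coming up 70% heads
    a = posterior(log_informative, heads, n)
    b = posterior(log_flat, heads, n)
    # Total variation distance between the two posteriors.
    print(n, 0.5 * np.abs(a - b).sum())
```

The printed distance shrinks as n grows: with enough data, the choice between an informative prior and a uniform one barely matters.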
Now I’m doubting myself >_> is it pretty different?? Anyone lurking reading this who knows whether uniform prior is very different than just dropping the prior term?
I believe it is the same thing. A uniform prior means your prior is a constant function, i.e. P(A_p|I) = x where x is a real number with the usual caveats. So if you have a uniform prior, you can drop it (from a safe height, of course). But perhaps the more seasoned Bayesians disagree? (where are they when you need them)
Shoot! You’re right! I think I was wrong this whole time about the impact of dropping the prior term. Because the posterior is the data term × the prior term—i.e. multiplying the two distributions—dropping the prior term is the same as multiplying the data distribution by a uniform one. Thanks for sticking with me :)
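For anyone lurking who wants to see it: the equivalence is easy to verify numerically. A minimal sketch with invented binomial data—the explicit uniform-prior posterior and the dropped-prior “posterior” coincide once normalised, because the constant cancels.

```python
import numpy as np

# Grid over the coin bias p.
p = np.linspace(0.001, 0.999, 999)

# Made-up data: 7 heads in 10 flips.
heads, flips = 7, 10
likelihood = p**heads * (1 - p)**(flips - heads)

# Posterior with an explicit uniform prior P(A_p|I) = const.
post_uniform = likelihood * np.ones_like(p)
post_uniform /= post_uniform.sum()

# "Posterior" with the prior term dropped entirely.
post_dropped = likelihood / likelihood.sum()

# After normalisation the two are identical.
print(np.allclose(post_uniform, post_dropped))  # True
```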
No worries :) Thanks a lot for your help! Much appreciated.
It’s amazing how complex a simple coin flipping problem can get when we approach it from our paradigm of objective Bayesianism. Professor Jaynes remarks on this after deriving the principle of indifference: “At this point, depending on your personality and background in this subject, you will be either greatly impressed or greatly disappointed by the result (2.91).”—page 40
A frequentist would have “solved” this problem rather easily. Personally, I would trade simplicity for coherence any day of the week...
I looooove that coin flip section! Cheers