I dropped the prior for two reasons:
1. I assumed the background information to be indifferent to the A_p’s.
2. We do not explicitly talk about the nature of the A_p’s. Prof. Jaynes defines A_p as a proposition such that P(A|A_p, E) = p. In my example, A_p is defined as a proposition such that P(H|A_p, I) = p. No matter what prior information we have, it is going to be indifferent to the A_p’s by virtue of the fact that we don’t know what A_p represents.
Is this justification valid?
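To pin down what I mean, here is roughly the update I have in mind, following the Chapter 18 notation but writing D for the observed flip data (D is my shorthand, not the book’s); the last factor in the first line is the prior term I dropped:

```latex
% Posterior density over the propositions A_p, given data D and background I
p(A_p \mid D, I) \;\propto\; P(D \mid A_p, I)\, p(A_p \mid I)

% Predictive probability of heads, integrating out the A_p's
P(H \mid D, I) \;=\; \int_0^1 p \; p(A_p \mid D, I)\, \mathrm{d}p
```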
Isn’t A_p the distribution over how often the coin will come up heads, or the probability of life on Mars? If so… there’s no way those things could be indifferent to the background information. A core tenet of the philosophy outlined in this book is that when you ignore prior information without good cause, things get wacky and fall apart. This is desideratum (IIIb) from Chapter 1: “The robot always takes into account all of the evidence it has relevant to a question. It does not arbitrarily ignore some of the information, basing its conclusions only on what remains.”
(Then Jaynes ignores information in later chapters because it doesn’t change the result… so this desideratum is easier said than done… but yeah)
“[…] A_p the distribution over how often the coin will come up heads […]”—I understood A_p to be a sort of distribution over models; we do not know/talk about the model itself, but we know that if a model A_p is true, then the probability of heads is equal to p, by definition of A_p. Perhaps the model A_p is the proposition “the centre of mass of the coin is at p” or “the bias-weighting of the coin is p”, but we do not care as long as the resulting probability of heads is p. So how can the prior not be indifferent when we do not know the nature of each proposition A_p in a set of mutually exclusive and exhaustive propositions?
I can’t see anything wrong in what you’ve said there, but I still have to insist without good argument that dropping P(A_p|I) is incorrect. In my vague defense, consider the two A_p distributions drawn on p558, for the penny and for Mars. Those distributions are as different as they are because of the different prior information. If it was correct to drop the prior term a priori, I think those distributions would look the same?
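To see this concretely, here is a rough numerical sketch (my own, not from the book, with made-up flip counts): the same data combined with two different priors over p gives noticeably different posteriors, so the prior term is doing real work.

```python
import numpy as np

# Grid over p = probability of heads
p = np.linspace(0.001, 0.999, 999)
dp = p[1] - p[0]

# Hypothetical data: 7 heads in 10 flips (made-up numbers, purely illustrative)
heads, tails = 7, 3
likelihood = p**heads * (1 - p)**tails

# Two priors encoding different background information I:
#   "penny-like": strong prior belief that the coin is fair (sharply peaked at 1/2)
#   "know-nothing": flat over p
prior_penny = p**99 * (1 - p)**99   # proportional to a Beta(100, 100) density
prior_flat = np.ones_like(p)

def posterior(prior):
    """Normalize likelihood * prior on the grid."""
    post = likelihood * prior
    return post / (post.sum() * dp)

post_penny = posterior(prior_penny)
post_flat = posterior(prior_flat)

# Posterior mean of p, i.e. P(heads on the next flip | data, prior)
print((p * post_penny).sum() * dp)   # stays close to 0.5
print((p * post_flat).sum() * dp)    # pulled up toward the observed 7/10
```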
You are right; dropping priors in the A_p distribution is probably not a general rule. Perhaps the propositions don’t always need to be interpretable for us to be able to impose priors? For example, people impose priors over the parameter space of a neural network, which is certainly not interpretable. But the topic of Bayesian neural networks is beyond me.
It seems like in practice, when there’s a lot of data, people like Jaynes and Gelman are happy to assign low-information (or “uninformative”) priors, knowing that with a lot of data the prior ends up getting washed away anyway. So just slapping a uniform prior down might be OK in a lot of real-world situations. This is, I think, pretty different from just dropping the prior completely, but it gets the same job done.
Now I’m doubting myself >_> is it pretty different?? Anyone lurking reading this who knows whether a uniform prior is very different from just dropping the prior term?
I believe it is the same thing. A uniform prior means your prior is a constant function, i.e. P(A_p|I) = x, where x is a real number (with the usual caveats). So if you have a uniform prior, you can drop it (from a safe height of course). But perhaps the more seasoned Bayesians disagree? (where are they when you need them)
Shoot! You’re right! I think I was wrong this whole time on the impact of dropping the prior term. Cuz data term * prior term is like multiplying the distributions, and dropping the prior term is like multiplying the data distribution by the uniform one. Thanks for sticking with me :)
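A quick numerical check of that equivalence (my own sketch, with made-up flip counts): normalizing the likelihood alone and normalizing likelihood times any constant prior give exactly the same distribution.

```python
import numpy as np

p = np.linspace(0.001, 0.999, 999)
dp = p[1] - p[0]

heads, tails = 7, 3                    # hypothetical flip counts
likelihood = p**heads * (1 - p)**tails

def normalize(f):
    return f / (f.sum() * dp)

# "Dropping the prior": posterior taken proportional to the likelihood alone
post_dropped = normalize(likelihood)

# Uniform prior P(A_p|I) = x for some constant x: the x cancels on normalization
x = 0.37                               # arbitrary constant
post_uniform = normalize(likelihood * x)

print(np.allclose(post_dropped, post_uniform))   # True
```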
No worries :) Thanks a lot for your help! Much appreciated.
It’s amazing how complex a simple coin flipping problem can get when we approach it from our paradigm of objective Bayesianism. Professor Jaynes remarks on this after deriving the principle of indifference: “At this point, depending on your personality and background in this subject, you will be either greatly impressed or greatly disappointed by the result (2.91).”—page 40
A frequentist would have “solved” this problem rather easily. Personally, I would trade simplicity for coherence any day of the week...
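Just to make that contrast concrete, a tiny sketch (my own, with hypothetical counts) of the frequentist point estimate next to the uniform-prior Bayesian answer:

```python
# Hypothetical record: 7 heads in 10 flips
heads, n = 7, 10

# Frequentist point estimate: the observed relative frequency
p_hat = heads / n                  # 0.7

# Uniform prior on p (Laplace's rule of succession): posterior mean of p,
# i.e. P(heads on the next flip | data, uniform prior)
p_bayes = (heads + 1) / (n + 2)    # 8/12 = 0.666...

print(p_hat, p_bayes)
```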
I looooove that coin flip section! Cheers