You are right; dropping priors in the A_p distribution is probably not a general rule. Perhaps the propositions don’t always need to be interpretable for us to be able to impose priors? For example, people impose priors over the parameter space of a neural network, which is certainly not interpretable. But the topic of Bayesian neural networks is beyond me.
It seems like in practice, when there’s a lot of data, people like Jaynes and Gelman are happy to assign low-information (or “uninformative”) priors, knowing that with a lot of data the prior ends up getting washed away anyway. So just slapping a uniform prior down might be OK in a lot of real-world situations. I think this is pretty different from just dropping the prior completely, but it gets the same job done.
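To make that concrete, here’s a quick numerical sketch (my own toy numbers, not from the book) using conjugate Beta–binomial updating: after 1000 flips, a flat Beta(1, 1) prior and a fairly opinionated Beta(20, 20) prior centered on fairness land on nearly the same posterior.

```python
# Hypothetical data: 1000 coin flips, 620 heads.
from scipy import stats

heads, tails = 620, 380

# Beta prior + binomial likelihood => Beta posterior (conjugacy).
flat = stats.beta(1 + heads, 1 + tails)       # uniform Beta(1, 1) prior
opinion = stats.beta(20 + heads, 20 + tails)  # Beta(20, 20) prior, peaked at 0.5

print(flat.mean(), opinion.mean())  # ~0.620 vs ~0.615
print(flat.std(), opinion.std())    # both ~0.015
```

With only 10 flips the two posteriors would differ noticeably; at 1000 the likelihood dominates.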
Now I’m doubting myself >_> is it pretty different?? Anyone lurking here who knows whether a uniform prior is very different from just dropping the prior term?
I believe it is the same thing. A uniform prior means your prior is a constant function, i.e. P(A_p|I) = x, where x is a real number with the usual caveats. So if you have a uniform prior, you can drop it (from a safe height, of course). But perhaps the more seasoned Bayesians disagree? (Where are they when you need them?)
Shoot! You’re right! I think I was wrong this whole time about the impact of dropping the prior term. Since the posterior is the data term times the prior term, dropping the prior term is the same as multiplying the data distribution by a uniform one. Thanks for sticking with me :)
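To convince myself, here’s a tiny grid-based sanity check (hypothetical data, purely illustrative): normalizing the likelihood on its own gives exactly the same posterior as multiplying by a uniform prior first and then normalizing, because the constant cancels.

```python
import numpy as np

p = np.linspace(0.001, 0.999, 999)   # grid over the coin's bias p
heads, tails = 7, 3                  # hypothetical data
likelihood = p**heads * (1 - p)**tails

# Posterior with an explicit uniform prior P(A_p | I) = const.
uniform_prior = np.ones_like(p)
post_with_prior = likelihood * uniform_prior
post_with_prior /= post_with_prior.sum()

# "Dropped" prior: just normalize the likelihood.
post_dropped = likelihood / likelihood.sum()

print(np.allclose(post_with_prior, post_dropped))  # True
```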
No worries :) Thanks a lot for your help! Much appreciated.
It’s amazing how complex a simple coin flipping problem can get when we approach it from our paradigm of objective Bayesianism. Professor Jaynes remarks on this after deriving the principle of indifference: “At this point, depending on your personality and background in this subject, you will be either greatly impressed or greatly disappointed by the result (2.91).”—page 40
A frequentist would have “solved” this problem rather easily. Personally, I would trade simplicity for coherence any day of the week...
I looooove that coin flip section! Cheers