The reason nobody else talks about the A_p distribution is that the same concept appears in standard probability expositions as a random variable representing an unknown probability. For example, Hoff’s “A First Course in Bayesian Statistical Methods” discusses the “binomial model” with an unknown “parameter” Θ. The “event” Θ=p plays the same role as the proposition A_p, since P(Y=1|Θ=p) = p. I think Jaynes does have something to add, but not so much in the A_p distribution chapter as in his chapter on the physics of coin flips, and in his analysis of die rolls, which I’m not sure is in the book. He gets you out of the standard Bayesian stats mindset where reality is a binomial model or multinomial model or whatever, and shows that A_p can actually have a meaning in terms of a physical model, such as a disjunction of die shapes that all lead to the same probability of rolling a 6. Your way of thinking of it as a limiting posterior probability from a certain kind of evidence is interesting too (or Jaynes’s way of thinking of it, if it was in the book; I don’t recall). Anyway, I wrote a post on this that didn’t get much karma; maybe you’ll be one of the few people who’s interested.
Thanks for the reference. You and the other commentator both seem to be saying the same thing: that there isn’t much of a use case for the A_p distribution, since Bayesian statisticians have other frameworks for thinking about these sorts of problems. It seems important that I acquaint myself with the basic tools of Bayesian statistics to better contextualize Jaynes’ contribution.
Sort of. I think the distribution of Θ is the A_p distribution, since it satisfies that formula; Θ=p is A_p. It’s just that Jaynes prefers an exposition modeled on propositional logic, whereas a standard probability textbook begins with the definition of “random variables” like Θ. But this seems to me just a notational difference, since an equation like Θ=p is, after all, a proposition from the perspective of propositional logic. So I would rather say that Bayesian statisticians are in fact using it, and I was just explaining why you don’t find any exposition of it under that name. I don’t think there’s a real conceptual difference. Jaynes of course would object to the word “random” in “random variable”, but it’s just a word; in my post I call it an “unknown quantity” and define it mathematically in the usual way.
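To make the equivalence concrete, here is a minimal sketch (my own illustration, not from either comment) of “that formula” in the simplest case: a binomial model with a uniform Beta(1,1) prior on Θ. Jaynes’s rule P(A|E) = ∫ p · P(A_p|E) dp is then just the posterior mean of Θ, which the conjugate closed form (Laplace’s rule of succession) confirms.

```python
# The "A_p distribution" as a posterior over an unknown probability Theta.
# Prior: Theta ~ Beta(1, 1) (uniform). Data E: k successes in n trials.
# Jaynes's rule P(A | E) = integral of p * P(A_p | E) dp is the posterior
# mean of Theta, approximated here by midpoint-rule grid integration.

def posterior_mean_grid(k, n, steps=100_000):
    """P(next trial succeeds | k successes in n trials), uniform prior.

    The unnormalized posterior density at p is the binomial likelihood
    p^k * (1-p)^(n-k); we integrate p against it and normalize."""
    dp = 1.0 / steps
    norm = 0.0
    mean = 0.0
    for i in range(steps):
        p = (i + 0.5) * dp
        w = p**k * (1 - p)**(n - k)  # posterior density up to a constant
        norm += w * dp
        mean += p * w * dp
    return mean / norm

def posterior_mean_exact(k, n):
    """Closed form: the Beta(1+k, 1+n-k) posterior has mean (1+k)/(2+n)."""
    return (1 + k) / (2 + n)

print(posterior_mean_grid(7, 10))   # numerically close to 8/12
print(posterior_mean_exact(7, 10))  # exactly 8/12 = 0.666...
```

The grid version mirrors the integral in Jaynes’s notation directly; the closed form is what a standard Bayesian statistics text would give you immediately, which is the point of the comparison.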