I don’t know the theory itself, but from your description it seems likely to be a simple ease-of-thinking thing. ‘What should I believe is the likelihood that the result of a coinflip is heads?’ is no different in meaning from ‘estimating the probability of heads from data’ or ‘how plausible is heads?’ as far as our actions go. We have formal ways of doing the middle of the three easily, so it is easier to think of it that way, and we have built up intuitions about coinflips that require it.
Whether or not it is a physical property, it is easier to describe properties of individual things rather than of large combinations of things and actions. If his description of how the evidence should be weighed includes large parts of his theory, it could still be a valuable example.
That’s my take as well. “estimating the probability” really means “calculating the plausibility based on this knowledge”.
I believe, mathematically, your claim can be expressed as:
P(H|D) = argmax_θ P(θ|D)
where θ is the “probability” parameter of the Bernoulli distribution, H represents the proposition that heads occurs, and D represents our data. The left side of this equation is the plausibility based on knowledge and the right side is Professor Jaynes’ ‘estimate of the probability’. How can we prove this statement?
Edit:
LaTeX is being a nuisance as usual :) The right side of the equation is the argmax with respect to theta of P(theta | data)
I think argmax is not the way to go, as the beta posterior from a binomial likelihood is only symmetric when the coin is fair. If you want a point estimate, the mean of the distribution is better; it will always be closer to 50/50 than the mode, and thus more conservative. With argmax you are essentially ignoring all the uncertainty in theta and thus overestimating the probability.
What is the theoretical justification behind taking the mean? Argmax feels more intuitive to me because it is literally “the most plausible value of theta”. In either case, whether we use argmax or the mean, can we prove that it is equal to P(H|D)?
If I have a distribution of 2 kids and a professional boxer, and a random one is going to hit me, then argmax tells me that I will always be hit by a kid. Sure, if you draw from the distribution only once, then argmax will beat the mean in 2/3 of the cases, but it’s much worse at answering what will happen if I draw 9 hits (argmax = nothing, mean = 3 hits from a boxer).
This distribution is skewed, like the beta distribution, and is therefore better summarized by the mean than the mode.
In Bayesian statistics, argmax on sigma will often lead to sigma = 0 if you assume that sigma follows an exponential distribution, thus leading you to assume that there is no variance in your sample.
The variance is also lower around the mean than around the mode, if that counts as a theoretical justification :)
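To make the mean-vs-mode point concrete, here is a minimal sketch with made-up counts (9 heads, 1 tail on a flat Beta(1, 1) prior), showing how skew pulls the two summaries apart:

```python
# Mean vs. mode (argmax) of a skewed Beta posterior.
# Counts are hypothetical, for illustration only.
a, b = 1 + 9, 1 + 1  # posterior Beta(10, 2) after 9 heads, 1 tail

mean = a / (a + b)            # E[theta | data]
mode = (a - 1) / (a + b - 2)  # argmax of p(theta | data), valid for a, b > 1

print(f"mean = {mean:.3f}")   # 0.833 -- closer to 50/50, more conservative
print(f"mode = {mode:.3f}")   # 0.900 -- the single 'most plausible' value
```

The more skewed the posterior, the further the mode drifts from the mean, which is the conservatism being argued for above.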
Disclaimer: Subjective Bayesian
Here is how we evil subjective Bayesians think about it.
Prior:
Let’s imagine two people, Jaynes and an alien. Jaynes knows that most coins are fair and has a Beta(20, 20) prior; the alien does not know this, and puts the ‘objective’ Beta(1, 1) prior, which is uniform over all frequencies.
Data:
The data comes up 12 heads and 8 tails
Posterior:
Jaynes has a narrow posterior, Beta(32, 28), and the alien a broader one, Beta(13, 9); Jaynes’s posterior is also closer to 50/50.
If Jaynes does not have access to the data that formed his prior, or cannot explain it well, then what he believes about the coin and what the alien believes about the coin are both ‘rational’, as each is the posterior from his personal prior and the shared data.
How to think about it:
Jaynes can publish a paper with the Beta(13, 9) posterior, because that is what skeptical people with weak priors will believe, while himself believing in Beta(32, 28).
To make it more concrete: Pfizer used a Beta(0.7, 1) prior for their COVID vaccine, but had they truly believed that prior they would have gone back to the drawing board instead of starting a phase 3 trial. The FDA is like the alien in the above example, with a very broad prior allowing most outcomes; Pfizer’s scientists are like Jaynes: they had all this data suggesting it should work pretty well, so they may have believed in Beta(5, 15) or whatever.
The other thing to notice is that the coin’s frequency is described by a distribution and not a scalar, because they are both unsure about the ‘real’ frequency.
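The two updates above follow the conjugate Beta-Binomial rule (posterior = Beta(a0 + heads, b0 + tails)); a quick sketch checking the numbers:

```python
# Conjugate Beta-Binomial update for Jaynes' and the alien's priors,
# using the data from the example: 12 heads, 8 tails.
heads, tails = 12, 8
posteriors = {}

for name, (a0, b0) in {"Jaynes": (20, 20), "alien": (1, 1)}.items():
    a, b = a0 + heads, b0 + tails
    mean = a / (a + b)
    sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
    posteriors[name] = (a, b, mean, sd)
    print(f"{name}: Beta({a}, {b}), mean = {mean:.3f}, sd = {sd:.3f}")
```

Jaynes’s posterior comes out both narrower (smaller sd) and closer to 0.5 than the alien’s, which is the whole point of the informative prior.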
Does this help or am I way off?
I am very grateful for your answer, but I have a few contentions from my paradigm of objective Bayesianism.
You have replaced probability with a physical property: “frequency”. I have also seen other people use terms like bias-weighting, fairness, center of mass, etc., which are all properties of the coin, to sidestep this question. I have nothing against theta being a physical property such that P(heads|theta=alpha) = alpha. In fact, it would make a ton of sense to me if this actually were the case. But the issue is when people say that theta is a probability and treat it as if it were a physical property. I presume you don’t view probabilities as physical properties. Even subjective Bayesians are not that evil...
“If Jaynes does not have access to the data that formed his prior, or cannot explain it well, then what he believes about the coin and what the alien believes about the coin are both ‘rational’, as each is the posterior from his personal prior and the shared data.” If Professor Jaynes did not have access to the data that formed his prior, his prior would have been the same as the alien’s and they would have ended up with the same posterior. There is no such thing as a “personal prior”. I invite you to the light side: read Professor Jaynes’ book; it is absolutely brilliant.
I may be too bad at philosophy to give a satisfying answer, and it may turn out that I actually do not know and am simply too dumb to realize that I should be confused about this :)
There is a frequency of the coin in the real world, let’s say it has θ=0.5
Because I am not omniscient, there is a distribution over θ; it is parameterized by some prior, which we will ignore (let’s not fight about that :)), and some data x. Thus in my head there exists a probability distribution p(θ∣x).
The probability distribution in my head is a distribution, not a scalar: I don’t know what θ is, but I may be 95% certain that it’s between 0.4 and 0.6.
I think there are problems with objective priors, but I am honored to have met an objective Bayesian in the wild, so I would love to try to understand you. I am Jan Christian Refsgaard on the University of Bayes and Bayesian Conspiracy discord servers. My main critique is the ‘invariance’ of some priors under some transformations, but that is a very weak critique and my epistemology is very underdeveloped. Also, I just bought Jaynes’ book :) and will read it when I find a study group, so who knows, maybe I will be an objective Bayesian a year from now :)
Response to point one: I do find that to be satisfactory from a philosophical perspective, but only because theta refers to a real-world property called frequency and not the probability of heads. My question to you is this: if you have a point estimate of theta, or if you find the exact real-world value of theta (perhaps by measuring it with an ACME frequency-o-meter), what does it tell you about the probability of heads?
Response to point two: The honour is mine :) If you ever create a study group or discord server for the book, then please count me in
In Bayesian statistics there are two distributions which I think we are conflating here, because they happen to have the same value:
The posterior p(θ∣y) describes our uncertainty of θ, given data (and prior information), so it’s how sure we are of the frequency of the coin
The posterior predictive is our prediction for new coin flips ~y given old coin flips y
p(~y|y) = ∫_Θ p(~y|θ, y) p(θ|y) dθ

For the simple Bernoulli coin example, the following issue arises: the parameter θ, the posterior predictive, and the posterior all have the same value, but they are different things.
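For a Beta posterior this integral has a closed form: the posterior predictive probability of heads is just the posterior mean a/(a+b). A quick numerical sanity check, reusing the alien’s Beta(13, 9) posterior from the example above:

```python
# Posterior predictive p(heads | y) for a Beta(a, b) posterior:
#   integral of  theta * p(theta | y) d theta  =  a / (a + b).
# Verified here by crude midpoint-rule integration against the closed form.
from math import gamma

a, b = 13, 9  # the alien's posterior from the coin example

def beta_pdf(t):
    return gamma(a + b) / (gamma(a) * gamma(b)) * t ** (a - 1) * (1 - t) ** (b - 1)

n = 100_000
dt = 1.0 / n
p_heads = sum((i + 0.5) * dt * beta_pdf((i + 0.5) * dt) * dt for i in range(n))

print(f"numerical   : {p_heads:.4f}")
print(f"closed form : {a / (a + b):.4f}")  # 13/22 = 0.5909
```

So for the coin, the predictive probability, the posterior mean, and (near-symmetric cases aside) the parameter all land on roughly the same number, which is exactly why they are so easy to conflate.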
Here is an example where they are different:
Here θ is not a coin but the logistic intercept of some binary outcome with predictor variable x. Let’s imagine an evil Nazi scientist poisoning people; then we could make a logistic model of y (alive/dead) such as y = invlogit(a·x + logit(θ)). Let’s imagine that x is how much poison you ate above/below the average poison level, and that we have θ=0.5, so on average half died.
Now we have:
The value if we were omniscient:

θ = 0.5

The posterior of θ (because we are not omniscient, there is error):

p(θ|y) = 0.5 ± ϵ

Predictions for two different ~y with uncertainty:

p(~y_lots of poison | y) = p(~y | y, ~x=2) = 0.99 ± ϵ ≈ 0.99
p(~y_average poison | y) = p(~y | y, ~x=0) = 0.5 ± ϵ ≈ 0.5

Does this help?
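As a sanity check of the numbers in the poisoning example: the slope a is not given in the comment above, so a = 2.3 is an assumed value, chosen so that x = 2 roughly reproduces the quoted 0.99:

```python
# Logistic poisoning model: p(dead | x) = invlogit(a*x + logit(theta)).
# The slope a = 2.3 is a hypothetical value, not from the original comment.
from math import exp, log

def invlogit(z):
    return 1 / (1 + exp(-z))

def logit(p):
    return log(p / (1 - p))

a, theta = 2.3, 0.5

print(f"x = 2 (lots of poison) : {invlogit(a * 2 + logit(theta)):.2f}")  # ~0.99
print(f"x = 0 (average poison) : {invlogit(a * 0 + logit(theta)):.2f}")  # 0.50
```

At x = 0 the prediction collapses to θ itself, while at x = 2 it clearly does not, which is the point: the parameter and the predictions are different objects here.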
I will PM you when we start reading Jaynes. We are currently reading Regression and Other Stories, but in about 20 weeks (done, if we do 1 chapter per week) there is a good chance we will do Jaynes.
To calculate the posterior predictive you need to calculate the posterior, and to calculate the posterior you need to calculate the likelihood (in most problems). For the coin flipping example: what is the probability of heads, and what is the probability of tails, given that the frequency is equal to some value theta? You might accuse me of being completely devoid of intuition for asking this question, but please bear with me...
Sounds good. I thought nobody was interested in reading Professor Jaynes’ book anymore. It’s a shame more people don’t know about him
Given 1. your model and 2. the magical absence of uncertainty in theta, then it’s theta. The posterior predictive allows us to jump from inference about parameters to inference about new data; it’s a distribution over y (coin flip outcomes), not theta (which describes the frequency).
Think I have finally got it. I would like to thank you once again for all your help; I really appreciate it.
This is what I think “estimating the probability” means:
We define theta to be a real-world/objective/physical quantity s.t. P(H|theta=alpha) = alpha & P(T|theta=alpha) = 1 - alpha. We do not talk about the nature of this quantity theta because we do not care what it is. I don’t think it is appropriate to say that theta is “frequency” for this reason:
“frequency” is not a well-defined physical quantity. You can’t measure “frequency” like you measure temperature.
But we do not need to dispute this, as theta being “frequency” is unnecessary.
Using the above definitions, we can compute the likelihood, then the posterior, and then the posterior predictive, which represents the probability of heads on the next flip given data from previous flips.
Is the above accurate?
So Bayesians who say that theta is the probability of heads and compute a point estimate of the parameter theta and say that they have “estimated the probability” are just frequentists in disguise?
I think the above is accurate.
I disagree with the last part, but there are two sources of confusion here:
Frequentist vs Bayesian is in principle about priors, but in practice about point estimates vs distributions.
Good frequentists use distributions, and bad Bayesians use point estimates such as Bayes factors; a good review of this is https://link.springer.com/article/10.3758/s13423-016-1221-4
But the leap from theta to the probability of heads is, I think, an intuitive leap that happens to be correct but is unjustified.
Philosophically, the posterior predictive is actually frequentist; allow me to explain:
Frequentists are people who estimate a parameter and then draw fake samples from that point estimate and summarize them in confidence intervals; to justify this they imagine parallel worlds and whatnot.
Bayesians are people who assume a prior distribution from which the parameter is drawn; they thus have both prior and likelihood uncertainty, which gives posterior uncertainty: the uncertainty of the parameters in their model. When a Bayesian wants to use his model to make predictions, he integrates the model parameters out and thus has a predictive distribution of new data given data*. Because this is a distribution of the data, like the frequentist’s sampling function, we can actually draw from it multiple times to compute summary statistics, much like the frequentists do, and calculate things such as a “Bayesian p-value”, which describes how likely the model is to have generated our data; here the goal is for the p-value to be high, because that suggests the model describes the data well.
*In the real world they do not integrate out theta; they draw it 10,000 times and use those samples as a stand-in distribution, because the math is too hard for complex models.
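That stand-in can be sketched in a few lines; here the coin’s Beta(13, 9) posterior is reused as a toy model:

```python
# Monte Carlo posterior predictive: instead of integrating theta out,
# draw theta from the posterior 10,000 times and simulate one new flip per draw.
import random

random.seed(0)
a, b = 13, 9  # Beta posterior from the coin example

draws = [random.betavariate(a, b) for _ in range(10_000)]
p_heads_mc = sum(random.random() < t for t in draws) / len(draws)

print(f"Monte Carlo : {p_heads_mc:.3f}")
print(f"exact       : {a / (a + b):.3f}")  # 13/22 = 0.591
```

The Monte Carlo answer matches the exact integral up to sampling noise, which is why the samples work as a stand-in distribution for complex models where the integral has no closed form.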
Excellent! One final point that I would like to add: if we say that “theta is a physical quantity s.t. [...]”, we are faced with an ontological question: “does a physical quantity exist with these properties?”
I recently found out about Professor Jaynes’ A_p distribution idea (it is introduced in chapter 18 of his book) from Maxwell Peterson in the sub-thread below, and I believe it is an elegant workaround to this problem. It leads to the same results but is more satisfying philosophically.
This is how it would work in the coin flipping example:
Define A(u) to be a function that maps from real numbers to propositions, with domain [0, 1], s.t.
1. The set of propositions {A(u): 0 ≤ u ≤ 1} is mutually exclusive and exhaustive
2. P(y=1 | A(u)) = u and P(y=0 | A(u)) = 1 - u
Because the set of propositions is mutually exclusive and exhaustive, there is exactly one u s.t. A(u) is true, and for any v != u, A(v) is false. We call this unique value of u theta.
It follows that P(y=1 | theta) = theta and P(y=0 | theta) = 1 - theta, and we use this to calculate the posterior predictive distribution.
Regarding reading Jaynes: my understanding is that it’s good for intuition but bad for applied statistics, because it does not teach you modern Bayesian tools such as WAIC and HMC, so you should first do one of the applied books. I also think Jaynes has nothing about causality.
I’m afraid I have to disagree. I do sometimes regret not focusing more on applied Bayesian inference. (In fact, I have no idea what WAIC or HMC is.) But in my defence, I am an amateur analytical philosopher & logician, and I couldn’t help finding more non-sequiturs in classical expositions of probability theory than plot-holes in Tolkien novels. Perhaps if I had been more naive and less critical (no offence to anyone) when I read those books, I would have “progressed” faster. I had lost hope in understanding probability theory before I read Professor Jaynes’ book; that’s why I respect the man so much. Now I have the intuition, but I am still trying to reconcile it with what I read in the applied literature. I sometimes find it frustrating that I am worrying about the philosophical nuances and intricacies of probability theory while others are applying their (perhaps less coherent) understanding of it to solve problems, but I strongly believe it is worth it :)
I am one of those people with a half-baked epistemology and understanding of probability theory, and I am looking forward to reading Jaynes. And I agree there are a lot of ad-hocisms in probability theory, which means everything is wrong in the logical sense, as some of the assumptions are broken; but a solid modern Bayesian approach has far fewer ad-hocisms and also teaches you to build advanced models in less than 400 pages.
HMC is a sampling approach to computing the posterior which in practice is superior to analytical methods, because it actually accounts for correlations between predictors and other things which are usually assumed away.
WAIC is information theory on distributions which allows you to say that model A is better than model B because the extra parameters in B are fitting noise; basically minimum description length on steroids for out-of-sample uncertainty.
Also, I studied biology, which is the worst: I can perform experiments and thus do not have to think about causality, and I do not expect my model to account for half of the signal even if it’s ‘correct’.
Appreciate your reply. I think the source of my confusion is the idea that there is uncertainty in the degree of plausibility that we assign given our knowledge, i.e. uncertainty in our degree of belief given our knowledge. This feels a bit unnatural to me, because this quantity is not an external/physical unknown quantity but one that we assign given our knowledge. If we were to think of probabilities as physical properties that are unknown, then it would make sense to me that there can be uncertainty in their value. How would you reconcile this?
The probability is an external/physical thing because your brain is physical, but I take your point.
I think the we/our distinction arises because we have different priors
That’s a very misleading way of looking at it.
These subjective Bayesians… :) I feel the same way about that statement. Could you please elaborate?
Uncertainty is a statement about my brain, not the real world. If you replicate the initial conditions, then it will always land either heads or tails; so even if the coin is “fair”, p(H∣θ)=0.5, then maybe p(H∣θ, very good at physics)=0.95. The uncertainty comes from me being stupid and thus being unable to predict the next coin toss.
Also, there are two things we are uncertain about: we are uncertain about θ (the coin’s frequency), and we are uncertain about p(H∣θ), the next coin toss.
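These two uncertainties can be separated with the law of total variance: the predictive variance of the next toss is E[θ(1−θ)] (toss noise, which would remain even if θ were known exactly) plus Var[θ] (our uncertainty about the frequency, which shrinks with more data). A sketch, reusing the Beta(13, 9) posterior from the coin example:

```python
# Decompose the uncertainty of the next toss for a Beta(a, b) posterior:
#   toss noise   E[theta * (1 - theta)]  (irreducible, even if theta were known)
#   theta noise  Var[theta]              (shrinks as we see more flips)
a, b = 13, 9
n = a + b

mean = a / n
var_theta = a * b / (n ** 2 * (n + 1))      # Var[theta | data]
toss_noise = mean * (1 - mean) - var_theta  # E[theta(1-theta)] = E[theta] - E[theta^2]

print(f"toss noise  : {toss_noise:.4f}")
print(f"theta noise : {var_theta:.4f}")
print(f"total       : {toss_noise + var_theta:.4f}")  # = mean * (1 - mean)
```

With more flips the theta-noise term goes to zero while the toss-noise term stays, which matches the point above: even a perfectly known fair coin is still unpredictable toss to toss.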
So you are saying that “we” are uncertain about the degree of belief/plausibility that our brain is going to assign? Then who are “we” exactly? Apologies for being glib, but I really don’t understand.
Also, it is a crime to have different priors given the same information according to us objective Bayesians so that can’t be the issue