This sense of how certain or uncertain a probability is may have no place in a perfect Bayesian reasoner, but I think it is meaningful information to consider as a human making decisions under uncertainty.
I don’t think the key issue is the imperfect Bayesianism of humans. I suppose that the discussed certainty of a probability has a lot to do with its dependence on priors: the more sensitive the probability is to changes in priors we find arbitrary, the less certain it feels. Priors themselves feel the most uncertain, while probabilities obtained from evidence-based calculations, especially quasi-frequentist probabilities such as P(heads on the next flip), depend on many priors, and a change in any single prior doesn’t move them far. Perfect Bayesians may not have the feeling, but they still have priors.
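To make this concrete, here is a minimal Beta-Binomial sketch (my own illustration; the particular hyperparameters and flip counts are arbitrary choices, not anything from the discussion). The prior mean swings around as we swap "arbitrary" priors, while the posterior predictive P(heads on next flip) after many observed flips barely moves:

```python
def predictive_heads(prior_a, prior_b, heads_seen, tails_seen):
    """Posterior predictive P(heads) under a Beta(prior_a, prior_b) prior."""
    a = prior_a + heads_seen
    b = prior_b + tails_seen
    return a / (a + b)

priors = [(1, 1), (0.5, 0.5), (5, 1)]  # three rather different "arbitrary" priors

# Before any data, the probability is just the prior mean and varies widely.
print([round(predictive_heads(a, b, 0, 0), 3) for a, b in priors])
# [0.5, 0.5, 0.833]

# After 60 heads in 100 flips, the same quantity is nearly prior-independent.
print([round(predictive_heads(a, b, 60, 40), 3) for a, b in priors])
# [0.598, 0.599, 0.613]
```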
Sensitivity to priors is the same as sensitivity to new evidence. And when we’re sensitive to new evidence, our estimates are likely to change, which is another reason they’re uncertain.
The reason this phenomenon occurs is that we are uncertain about some fundamental frequency (or about a model more complex than a simple frequency model), and P(heads | frequency of heads is x) = x.
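Writing that out, with p(x) standing for our uncertainty over the frequency, the law of total probability gives

\[ P(\text{heads}) \;=\; \int_0^1 P(\text{heads} \mid x)\, p(x)\, dx \;=\; \int_0^1 x\, p(x)\, dx \;=\; \mathbb{E}[x], \]

so the single number we report is just the mean of p(x), and the felt "uncertainty of the probability" tracks how spread out p(x) is.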
I think there’s something to what you say, but a perfect Bayesian (or an imperfect human, for that matter) is conditional probabilities all the way down. When we talk about our priors regarding a particular question, they are really just the output of another chain of reasoning. The boundaries we draw to make discussion feasible are somewhat arbitrary (though for a perfect Bayesian reasoner they would probably reflect specific mathematical properties of the underlying network).
Do you think the chain of reasoning is infinite? For actual humans there is certainly some boundary below which a prior no longer feels like the output of further computation, although such beliefs could have been influenced by earlier observations, either subconsciously or consciously with the fact later forgotten. Especially in the former case, I think the reasoning leading to such beliefs is very likely to be flawed, so it seems fair to treat them as genuine priors, even if, strictly speaking, they were physically influenced by evidence.
A perfect Bayesian, on the other hand, should be immune to flawed reasoning, but it still has to be finite, so I suppose it must have some genuine priors which are part of its immutable hardware. I imagine it by analogy with formal systems, which have a finite set of axioms (or an infinite set defined by a finite set of conditions), a finite set of derivation rules, and a set of theorems consisting of the axioms and the derived statements. For a Bayesian, the axioms are replaced by statements with associated priors, Bayes’ theorem is among the derivation rules, and instead of a set of theorems it has a set of encountered statements with attached probabilities (a toy sketch of what I mean follows the list of issues below). Possible issues are:
If such a formal construction is possible, there should be a lot of literature about it, and I am unaware of any (though I didn’t try very hard to find it), and
I am not sure whether such an approach is already obsolete in light of discussions about updateless decision theories and similar ideas.
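For what it’s worth, here is the toy sketch I mentioned, under heavy simplifying assumptions of my own: "statements" are just string labels, the only derivation rule is Bayes’ theorem, and the likelihoods are supplied from outside rather than derived. The class and its interface are hypothetical, not drawn from any literature.

```python
class BayesianSystem:
    def __init__(self, priors):
        # The finite set of "axioms": statements with immutable prior probabilities.
        self.priors = dict(priors)
        # The growing set of encountered statements with attached probabilities.
        self.beliefs = dict(priors)

    def update(self, hypothesis, evidence, p_e_given_h, p_e_given_not_h):
        """Derivation rule: Bayes' theorem, applied when `evidence` is observed."""
        p_h = self.beliefs[hypothesis]
        p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
        self.beliefs[hypothesis] = p_e_given_h * p_h / p_e
        # The observed evidence becomes an encountered statement with probability 1.
        self.beliefs[evidence] = 1.0
        return self.beliefs[hypothesis]

system = BayesianSystem({"coin is biased towards heads": 0.1})
system.update("coin is biased towards heads", "observed 8 heads in 10 flips",
              p_e_given_h=0.30, p_e_given_not_h=0.04)
print(system.beliefs)  # posterior for the hypothesis is roughly 0.45
```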
Not infinite, but for humans all priors (or their non-strictly-Bayesian equivalents, at least) ultimately derive either from sensory input over the individual’s lifetime or from millions of years of evolution baking ‘hard-coded’ priors into the human brain.
When dealing with any particular question you essentially draw a somewhat arbitrary line, lump millions of years of accumulated sensory input and evolutionary ‘learning’ together with a lifetime of actual learning, assign a single real number to it, and call it a ‘prior’, but this is just a way of making calculation tractable.