I think there’s something to what you say, but a perfect Bayesian (or an imperfect human, for that matter) is conditional probabilities all the way down. When we talk about our priors regarding a particular question, they are really just the output of another chain of reasoning. The boundaries we draw to make discussion feasible are somewhat arbitrary (though for a perfect Bayesian reasoner they would probably reflect specific mathematical properties of the underlying network).
Do you think the chain of reasoning is infinite? For actual humans there is certainly some boundary below which a prior no longer feels like the output of further computation, although such beliefs could have been influenced by earlier observations, either subconsciously, or consciously with the fact later forgotten. Especially in the former case, I think the reasoning leading to such beliefs is very likely to be flawed, so it seems fair to treat them as genuine priors, even if, strictly speaking, they were physically influenced by evidence.
A perfect Bayesian, on the other hand, should be immune to flawed reasoning, but it still has to be finite, so I suppose it must have some genuine priors which are part of its immutable hardware. I imagine it by analogy with formal systems, which have a finite set of axioms (or an infinite set defined by finitely many conditions), a finite set of derivation rules, and a set of theorems consisting of the axioms and the derived statements. For a Bayesian, the axioms are replaced by a number of statements with associated priors, Bayes’ theorem is among the derivation rules, and instead of a set of theorems it has a set of encountered statements with attached probabilities. Possible issues are:
If such a formal construction is possible, there should be a lot of literature about it, and I am unaware of any (though I didn’t try too hard to find it), and
I am not sure that such an approach isn’t already obsolete in light of discussions about updateless decision theories and similar ideas.
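The construction described above can be sketched in a few lines of code. This is only an illustration of the analogy, not a reference to any existing formalism; the class name, hypothesis labels, and numbers are all invented for the example. The reasoner starts from a fixed set of "hardware" priors and has exactly one derivation rule, Bayes’ theorem:

```python
class FiniteBayesian:
    """Sketch of a finite Bayesian reasoner: immutable initial priors,
    Bayes' theorem as the sole derivation rule."""

    def __init__(self, priors):
        # Genuine priors: hypothesis -> probability, fixed at "birth".
        self.beliefs = dict(priors)

    def update(self, hypothesis, p_e_given_h, p_e_given_not_h):
        # Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E),
        # with P(E) expanded by the law of total probability.
        p_h = self.beliefs[hypothesis]
        p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
        self.beliefs[hypothesis] = p_e_given_h * p_h / p_e

# Start from a genuine prior of 0.5 and update on an observation
# that is twice as likely if the hypothesis is true.
reasoner = FiniteBayesian({"H": 0.5})
reasoner.update("H", p_e_given_h=0.8, p_e_given_not_h=0.4)
print(reasoner.beliefs["H"])  # 2/3
```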
Not infinite, but for humans all priors (or at least their non-strictly-Bayesian equivalents) ultimately derive either from sensory input over the individual’s lifetime or from millions of years of evolution baking ‘hard-coded’ priors into the human brain.
When dealing with any particular question, you essentially draw a somewhat arbitrary line: you lump millions of years of accumulated sensory input and evolutionary ‘learning’ together with a lifetime of actual learning, assign a single real number to it, and call it a ‘prior’. But this is just a way of making calculation tractable.
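One way to see why drawing the line arbitrarily is harmless: Bayesian updates compose, so folding earlier evidence into a single "prior" number and then updating on new evidence gives the same posterior as conditioning on everything at once. A toy illustration with made-up numbers (assuming the two pieces of evidence are conditionally independent given the hypothesis):

```python
def bayes(prior, p_e_given_h, p_e_given_not_h):
    # Posterior P(H|E) via Bayes' theorem, denominator by total probability.
    return (p_e_given_h * prior
            / (p_e_given_h * prior + p_e_given_not_h * (1 - prior)))

p0 = 0.5  # the "ultimate" prior, before any evidence

# Sequential: update on e1, then treat the result as "the prior" for e2.
after_e1 = bayes(p0, 0.9, 0.3)
sequential = bayes(after_e1, 0.6, 0.2)

# Lumped: condition on e1 and e2 at once; independent likelihoods multiply.
lumped = bayes(p0, 0.9 * 0.6, 0.3 * 0.2)

print(sequential, lumped)  # both 0.9
```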