In theories of Bayesianism, the axioms of probability theory are conventionally assumed to say that all logical truths have probability one, and that the probability of a disjunction of mutually inconsistent statements is the sum of their probabilities, corresponding to the second and third Kolmogorov axioms.
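Spelled out in logical notation (my own paraphrase here; Kolmogorov's axioms are originally stated for events in a σ-algebra rather than for sentences), this reading amounts to roughly:

\[
\models A \;\Rightarrow\; P(A) = 1,
\qquad
\models \neg(A \wedge B) \;\Rightarrow\; P(A \vee B) = P(A) + P(B).
\]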
If one then regards, e.g., the Peano axioms as certain, then all theorems of Peano arithmetic must also be certain, because they are just logical consequences of those axioms. And all statements which can be disproved in Peano arithmetic must then have probability zero. So the above version of the Kolmogorov axioms assumes we are logically omniscient, and this form of Bayesianism doesn’t allow us to assign anything like 0.5 probability to the googolth digit of pi being odd: we must assign 1 if it’s odd, or 0 if it’s even.
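To make the pi example explicit (writing G for “the googolth digit of pi is odd” and PA for the conjunction of the Peano axioms; the notation is mine): the digits of pi are computable, and PA can verify any such finite computation, so PA proves either G or its negation. Under the logically omniscient reading, P(PA) = 1 then forces

\[
\mathrm{PA} \vdash G \;\Rightarrow\; P(G) = 1,
\qquad
\mathrm{PA} \vdash \neg G \;\Rightarrow\; P(G) = 0,
\]

so P(G) is already pinned to 0 or 1, even though nobody has done the computation.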
I think the simple solution is to not talk about logical tautologies and contradictions when expressing the Kolmogorov axioms for a theory of subjective Bayesianism. Instead talk about what we actually know a priori, not about tautologies which we merely could know a priori (if we were logically omniscient). Then the modified second axiom says that statements we actually know a priori have to be assigned probability 1, and the modified third axiom says that disjunctions of statements actually known a priori to be mutually exclusive have to be assigned the sum of their probabilities.
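In symbols (again my own notation, with K for the set of statements the agent actually knows a priori), the weakened axioms would read something like:

\[
A \in K \;\Rightarrow\; P(A) = 1,
\qquad
\neg(A \wedge B) \in K \;\Rightarrow\; P(A \vee B) = P(A) + P(B).
\]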
Then we are allowed to assign probabilities less than 1 to statements that we don’t actually know to be tautologies, e.g. 0.5 to “the googolth digit of pi is odd”, even if this happens to be, unbeknownst to us, a theorem of Peano arithmetic.
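As a toy illustration that such weakened constraints can be checked mechanically (everything below, i.e. the statement names, the tiny knowledge base, and the checker, is hypothetical and just a sketch):

```python
# Toy check of the weakened axioms: statements the agent actually knows
# a priori get probability 1, and additivity is only enforced for
# disjunctions the agent actually knows to be mutually exclusive.

# Statements are plain strings; "A v B" names the disjunction of A and B.
known_a_priori = {"0 = 0"}                      # things we have actually verified
known_exclusive = {("D is odd", "D is even")}   # D = the googolth digit of pi

credence = {
    "0 = 0": 1.0,
    "D is odd": 0.5,          # allowed, even if PA secretly proves or refutes it
    "D is even": 0.5,
    "D is odd v D is even": 1.0,
}

def satisfies_weakened_axioms(p, known, exclusive, tol=1e-9):
    # Axiom 1: all credences lie in [0, 1].
    if any(not 0.0 <= v <= 1.0 for v in p.values()):
        return False
    # Weakened axiom 2: what is actually known a priori has credence 1.
    if any(abs(p[s] - 1.0) > tol for s in known if s in p):
        return False
    # Weakened axiom 3: additivity only where exclusivity is actually known.
    for a, b in exclusive:
        disjunction = f"{a} v {b}"
        if a in p and b in p and disjunction in p:
            if abs(p[disjunction] - (p[a] + p[b])) > tol:
                return False
    return True

print(satisfies_weakened_axioms(credence, known_a_priori, known_exclusive))  # True
```

The checker never asks whether “D is odd” is provable in Peano arithmetic; it only consults what the agent has actually verified, which is exactly the point of the weakened axioms.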
> I think the simple solution is to not talk about logical tautologies and contradictions when expressing the Kolmogorov axioms for a theory of subjective Bayesianism. Instead talk about what we actually know a priori, not about tautologies which we merely could know a priori (if we were logically omniscient).
Yes, absolutely. When I apply probability theory, it should represent my state of knowledge, not the state of knowledge of some logically omniscient being. This seems like such an obvious thing to me that I struggle to understand why it’s still not a standard approach.
So are there some hidden paradoxes of such an approach that I just do not see yet? Or maybe some issues with the formalization of the axioms?
Yeah, I think it’s that one.