Could you explain in more detail why Bayesian epistemology can’t be built without such an assumption?
Well, could you explain how to build it that way? Bayesian epistemology begins by interpreting (correct) degrees of belief as probabilities satisfying the Kolmogorov axioms, which implies logical omniscience. If we don’t assume our degrees of belief ought to satisfy the Kolmogorov axioms (or some other axioms which entail Kolmogorov’s), then we are no longer doing Bayesian epistemology.
Is there more to it than that it is the definition of Bayesian epistemology?
Logical omniscience with respect to propositional logic is necessary if we require that p(A|B) = 1 whenever A is deducible from B. Relaxing this requirement still leaves us with a workable system. Of course, the reasoner should update his p(A|B) to somewhere close to 1 after seeing a proof that B⇒A, but he needn’t hold this belief a priori.
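For reference, a one-line sketch of why the standard setup forces this value, using the ratio definition of conditional probability and assuming P(B) > 0:

```latex
% If B entails A classically, the event A \wedge B coincides with B, so
\[
  B \models A \;\Rightarrow\; (A \wedge B) \equiv B \;\Rightarrow\;
  P(A \mid B) = \frac{P(A \wedge B)}{P(B)} = \frac{P(B)}{P(B)} = 1
  \qquad (\text{assuming } P(B) > 0).
\]
```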
Logical omniscience comes from probability “statics,” not conditionalization. When A is any propositional tautology, P(A) (note the absence of conditioning) can be shown by algebraic manipulation of the three Kolmogorov axioms to equal 1. Rejecting one of the axioms to avoid this result leaves you vulnerable to Dutch books. (Perhaps this is not so surprising, since reasoning about Dutch books assumes classical logic. I have no idea how one would handle Dutch book arguments if we relax this assumption.)
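To make the “statics” point concrete, here is a sketch of the usual derivation for the simplest tautology, A ∨ ¬A; any propositional tautology is equivalent to the sure event, so the same argument applies:

```latex
% A and \neg A are disjoint and jointly exhaust the sample space \Omega, so
% finite additivity and normalization give
\[
  P(A \vee \neg A) \;=\; P(A) + P(\neg A) \;=\; P(\Omega) \;=\; 1 .
\]
% Assigning a tautology any value below 1 therefore contradicts one of the axioms.
```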
Of course, if I am inconsistent, I can be Dutch booked. If I believe that P(tautology) = 0.8 because I haven’t realised it is a tautology, somebody who knows that will offer me a bet and I will lose. But, well, lack of knowledge leads to sub-optimal decisions—I don’t see it as a fatal flaw.
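A minimal sketch of how that bet goes, with invented stake sizes:

```python
# Sketch of the Dutch book against P(tautology) = 0.8 (illustrative numbers only).
# With credence 0.8 in a statement T, I regard $0.80 as a fair price at which to
# sell a ticket that pays $1 if T is true. A bookie who knows T is a tautology
# buys that ticket; since T holds in every possible world, I always pay out.

my_credence = 0.8             # my (mistaken) degree of belief in the tautology T
stake = 1.0                   # the ticket pays $1 if T is true
price = my_credence * stake   # the sale price I consider fair: $0.80

# Enumerate every truth-value assignment to the single atom p;
# the tautology (p or not p) comes out true in all of them.
for p in (True, False):
    T = p or (not p)
    my_net = price - (stake if T else 0.0)
    print(f"world p={p}: my net outcome {my_net:+.2f}")  # always -0.20, a sure loss
```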
I suppose one could draw from this a similar response to any Dutch book argument. Sure, if my “degree of belief” in a possible statement A is 2, I can be Dutch booked. But now that I’m licensed to disbelieve entailments (so long as I take myself to be ignorant that they’re entailments), perhaps I justifiably believe that I can’t be Dutch booked. So what rational constraints are there on any of my beliefs? Whatever argument you give me for a constraint C from premises P1, …, Pn, I can always potentially justifiably believe the conditional “If the premises P1, …, Pn are true, then C is correct” has low probability—even if the argument is purely deductive.
You are right. I think this is the tradeoff: either we demand logical omniscience, or we have to allow disbelief in entailment. Still, I don’t see a big problem here, because I think of Bayesian epistemology as a tool which I voluntarily adopt to improve my cognition. I have no reason to deliberately reject (assign a low probability to) a deductive argument when I know it, since I would harm myself that way (at least I believe so, because I trust deductive arguments in general). I am “licensed to disbelieve entailments” only in order to keep the system well defined; in practice I don’t disbelieve them once I know their status. The “take myself to be ignorant that they’re entailments” part is irrational.
I must admit that I don’t have a clear idea how to formalise this. I know what I do in practice: when I don’t know that two facts are logically related, I treat them as independent, and that works as an approximation. Perhaps the trust in logic should be incorporated into the prior somehow. Certainly I have to think about it more.
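A rough sketch of that practice, with invented numbers (not a worked-out formalism):

```python
# Sketch: treating two propositions as independent while ignorant of their
# logical relation, then revising once the entailment B => A is learned.
# All numbers are invented for illustration.

p_A, p_B = 0.6, 0.5

# Before learning the entailment: treat A and B as probabilistically independent.
p_A_and_B_naive = p_A * p_B      # 0.30
p_A_given_B_naive = p_A          # independence gives P(A|B) = P(A) = 0.6

# After seeing a proof that B entails A: P(A|B) must be 1, so
# P(A and B) = P(B), and P(A) can be no smaller than P(B).
p_A_given_B_informed = 1.0
p_A_and_B_informed = p_B         # 0.50
p_A_revised = max(p_A, p_B)      # the minimal correction to P(A), here still 0.6

print("naive:   ", p_A_and_B_naive, p_A_given_B_naive)
print("informed:", p_A_and_B_informed, p_A_given_B_informed, p_A_revised)
```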