I suppose one could draw from this a similar response to any Dutch book argument. Sure, if my “degree of belief” in a possible statement A is 2, I can be Dutch booked. But now that I’m licensed to disbelieve entailments (so long as I take myself to be ignorant that they’re entailments), perhaps I justifiably believe that I can’t be Dutch booked. So what rational constraints are there on any of my beliefs? Whatever argument you give me for a constraint C from premises P1, …, Pn, I can always potentially justifiably believe the conditional “If the premises P1, …, Pn are true, then C is correct” has low probability—even if the argument is purely deductive.
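To make the sure loss concrete, here is a minimal sketch (the setup and numbers are illustrative, not from the original argument): a credence p is read as the price the agent regards as fair for a ticket paying $1 if A is true, so a "degree of belief" of 2 guarantees a loss either way.

```python
# Hypothetical bookie exploiting a "degree of belief" of 2 in a statement A.
# A credence p is read as the agent's fair price for a ticket that
# pays $1 if A is true and $0 otherwise.

def net_payoff(credence: float, a_is_true: bool) -> float:
    """Agent buys the $1-on-A ticket at its 'fair' price = credence."""
    ticket_value = 1.0 if a_is_true else 0.0
    return ticket_value - credence

# With credence 2 the agent loses money whichever way A turns out:
outcomes = {a: net_payoff(2.0, a) for a in (True, False)}
print(outcomes)  # {True: -1.0, False: -2.0}
assert all(v < 0 for v in outcomes.values())  # guaranteed loss: a Dutch book
```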
You are right. I think this is the tradeoff: either we demand logical omniscience, or we have to allow disbelief in entailment. Still, I don’t see a big problem here, because I think of Bayesian epistemology as a tool I voluntarily adopt to improve my cognition. I have no reason to deliberately reject (assign a low probability to) a deductive argument once I know it is one, since I would harm myself that way (at least I believe so, because I trust deductive arguments in general). I am “licensed to disbelieve entailments” only in order to keep the system well defined; in practice I don’t disbelieve them once I know their status. The “take myself to be ignorant that they’re entailments” part is irrational.
I must admit that I don’t have a clear idea of how to formalise this. I know what I do in practice: when I don’t know that two facts are logically related, I treat them as independent, and this works as an approximation. Perhaps the trust in logic should be incorporated into the prior somehow. Certainly I have to think about it more.
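The independence heuristic, and the error it incurs before the entailment is learned, can be sketched like this (the probabilities are made up for illustration; suppose A in fact entails B, so that P(A and B) = P(A)):

```python
# Sketch of the "treat unknown relations as independent" heuristic.
# Suppose A in fact entails B, but the agent does not yet know this
# and so factorises the joint probability. Numbers are illustrative.

p_a, p_b = 0.3, 0.5

# Independence approximation used while ignorant of the entailment:
p_joint_approx = p_a * p_b  # P(A)P(B) = 0.15

# True value once the entailment A |= B is learned: P(A and B) = P(A)
p_joint_true = p_a          # 0.3

error = p_joint_true - p_joint_approx
print(error)  # the cost of ignorance, removed on learning the entailment
```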