I think a lot of the replies here suggesting that Bayesian epistemology easily dissolves the puzzles are mistaken. In particular, the Bayesian equivalent of (1) is the problem of logical omniscience. Traditional Bayesian epistemology assumes that reasoners are logically omniscient at least with respect to propositional logic. But (1), suitably understood, provides a plausible scenario where logical omniscience fails.
I do agree that the correct understanding of the puzzles is going to come from formal epistemology, but at present there are no agreed-upon solutions that handle all instances of the puzzles.
The formulations of “logical omniscience is a problem for Bayesian reasoners” that I have seen are not sufficiently worrying; actually creating a Dutch book would require the party offering it to have the logical omniscience the Bayesian lacks, which is not a situation we encounter very often.
Sorry, I’m not sure I understand what you mean. Could you elaborate?

It’s just that logical omniscience is required to quickly identify the (pre-determined) truth value of incredibly complicated mathematical statements; if you want to exploit my not knowing the thousandth Mersenne prime, you have to know the thousandth Mersenne prime to do so, and humans generally don’t encounter beings that have significantly more logical knowledge.
Traditional Bayesian epistemology assumes that reasoners are logically omniscient at least with respect to propositional logic.
This can be treated for cases like problem (1) by saying that since the probabilities are computed with the brain, if the brain makes a mistake in the ordinary proof, the equivalent proof using probabilities will also contain the mistake.
Dealing with limited (as opposed to imperfect) computational resources would be more interesting—I wonder what happens when you relax the consistency requirement to proofs smaller than some size N?
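Something like the following toy sketch is the kind of thing I have in mind; the specifics here (a brute-force entailment check, a budget counted in number of atoms rather than actual proof size) are placeholders for illustration, not a worked-out proposal:

```python
from itertools import product

def entails(f, g, n_atoms):
    """Brute-force check that f logically entails g over n_atoms atoms."""
    return all((not f(*v)) or g(*v)
               for v in product([False, True], repeat=n_atoms))

def bounded_violations(beliefs, n_atoms, budget):
    """Entailment violations a bounded reasoner is obliged to notice.

    beliefs maps a name to (formula, probability). If checking entailment
    over n_atoms atoms exceeds the budget, no consistency is demanded.
    """
    if n_atoms > budget:
        return []  # beyond the proof-size budget: anything goes
    violations = []
    for name_f, (f, p_f) in beliefs.items():
        for name_g, (g, p_g) in beliefs.items():
            if name_f != name_g and entails(f, g, n_atoms) and p_f > p_g:
                violations.append((name_f, name_g, p_f, p_g))
    return violations

# "A and B" entails "A or B", so P(A and B) = 0.7 > 0.4 = P(A or B) is
# incoherent, but only a reasoner with a budget of at least 2 atoms
# is required to notice.
beliefs = {
    "A and B": (lambda a, b: a and b, 0.7),
    "A or B":  (lambda a, b: a or b,  0.4),
}
print(bounded_violations(beliefs, n_atoms=2, budget=2))
print(bounded_violations(beliefs, n_atoms=2, budget=1))
```

With budget = 1 the incoherence above is invisible to the reasoner and so is tolerated, which is roughly the situation a consistency-up-to-size-N requirement would have to accept.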
Traditional Bayesian epistemology assumes that reasoners are logically omniscient at least with respect to propositional logic.
Could you explain in more detail why Bayesian epistemology can’t be built without such an assumption? All the arguments I have seen went along the lines of “unless you are logically omniscient, you may end up having inconsistent probabilities”. That may be aesthetically unpleasant when we think about ideal Bayesian agents, but it doesn’t seem to be a grave concern for Bayesianism as a prescriptive norm of human reasoning.
Could you explain in more detail why Bayesian epistemology can’t be built without such an assumption?
Well, could you explain how to build it that way? Bayesian epistemology begins by interpreting (correct) degrees of belief as probabilities satisfying the Kolmogorov axioms, which implies logical omniscience. If we don’t assume our degrees of belief ought to satisfy the Kolmogorov axioms (or assume they satisfy some other axioms which entail Kolmogorov’s), then we are no longer doing Bayesian epistemology.
Is there more to it than that it is the definition of Bayesian epistemology?
Logical omniscience with respect to propositional logic is necessary if we require that p(A|B) = 1 whenever A is deducible from B. Relaxing this requirement still leaves us with a working system. Of course, the reasoner should update his p(A|B) to somewhere close to 1 after seeing the proof that B⇒A, but he needn’t have this belief a priori.
Logical omniscience comes from probability “statics,” not conditionalization. When A is any propositional tautology, P(A) (note the lack of conditional) can be algebraically manipulated via the three Kolmogorov axioms to yield 1. Rejecting one of the axioms to avoid this result leaves you vulnerable to Dutch books. (Perhaps this is not so surprising, since reasoning about Dutch books assumes classical logic. I have no idea how one would handle Dutch book arguments if we relax this assumption.)
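For the simplest tautology the algebra is only a couple of steps, and nothing beyond the three axioms is used:

P(A ∨ ¬A) = P(A) + P(¬A), since A and ¬A are mutually exclusive (additivity);
P(¬A) = 1 − P(A), from normalization together with additivity;
hence P(A ∨ ¬A) = P(A) + (1 − P(A)) = 1, whatever value P(A) has.

So any value other than 1 for this sentence conflicts with the axioms before conditionalization ever enters the picture; more complicated tautologies just take more of the same algebra.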
Of course, if I am inconsistent, I can be Dutch booked. If I believe that P(tautology) = 0.8 because I haven’t realised it is a tautology, somebody who knows that will offer me a bet and I will lose. But, well, lack of knowledge leads to sub-optimal decisions—I don’t see it as a fatal flaw.
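Spelled out, the bet is as simple as it gets: with P(tautology) = 0.8 I treat $0.80 as the fair price for a ticket that pays $1 if the tautology is true, so I will sell such a ticket for $0.80; the tautology is always true, so I pay out $1 and am down $0.20 no matter what. A guaranteed loss, but of the same kind that any missing piece of knowledge exposes me to.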
I suppose one could draw from this a similar response to any Dutch book argument. Sure, if my “degree of belief” in a possible statement A is 2, I can be Dutch booked. But now that I’m licensed to disbelieve entailments (so long as I take myself to be ignorant that they’re entailments), perhaps I justifiably believe that I can’t be Dutch booked. So what rational constraints are there on any of my beliefs? Whatever argument you give me for a constraint C from premises P1, …, Pn, I can always potentially justifiably believe the conditional “If the premises P1, …, Pn are true, then C is correct” has low probability—even if the argument is purely deductive.
You are right. I think this is the tradeoff: either we demand logical omniscience, or we have to allow disbelief in entailment. Still, I don’t see a big problem here, because I think of Bayesian epistemology as a tool which I voluntarily adopt to improve my cognition. I have no reason to deliberately reject (assign a low probability to) a deductive argument when I know it, since I would harm myself that way (at least I believe so, because I trust deductive arguments in general). I am “licensed to disbelieve entailments” only in order to keep the system well defined; in practice I don’t disbelieve them once I know their status. The “take myself to be ignorant that they’re entailments” part is irrational.
I must admit that I haven’t a clear idea how to formalise this. I know what I do in practice: when I don’t know that two facts are logically related, I treat them as independent, and that works as an approximation. Perhaps the trust in logic should be incorporated into the prior somehow. Certainly I have to think about it more.
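A made-up example of the heuristic: say P(A) = 0.6 and P(B) = 0.5 and I have no idea whether A and B are logically connected. Treating them as independent, I set P(A ∧ B) = 0.6 × 0.5 = 0.3. If I later see a proof that B ⇒ A, then A ∧ B is equivalent to B, so I revise P(A ∧ B) up to 0.5 (and check that P(A) ≥ P(B), which it already is). Until such a proof turns up, the independence assumption usually does little damage, but I agree this is a heuristic rather than a formalisation.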