Traditional Bayesian epistemology assumes that reasoners are logically omniscient at least with respect to propositional logic.
Cases like problem (1) can be handled by noting that the probabilities are computed by the same brain: if the brain makes a mistake in the ordinary proof, the equivalent proof using probabilities will contain the same mistake.
Dealing with limited (as opposed to imperfect) computational resources would be more interesting. I wonder what happens when you relax the consistency requirement to hold only for proofs smaller than some size N?
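As a toy illustration of that relaxation (my own sketch, not anything from the original discussion): a reasoner that only enforces consistency on conclusions reachable within a proof budget of N inference steps. Here the "logic" is just forward chaining over Horn rules, and `bounded_closure`, `credence`, and the `prior` fallback are all hypothetical names introduced for the example.

```python
def bounded_closure(facts, rules, n_steps):
    """Atoms derivable from `facts` in at most `n_steps` rounds of
    applying Horn `rules`, each given as (premises, conclusion)."""
    known = set(facts)
    for _ in range(n_steps):
        new = {c for (ps, c) in rules if set(ps) <= known and c not in known}
        if not new:
            break  # closure reached before the budget ran out
        known |= new
    return known

def credence(atom, facts, rules, n_steps, prior=0.5):
    """Probability 1 for anything provable within the budget; otherwise
    fall back to an unexamined prior. The bounded reasoner never notices
    the inconsistency of assigning `prior` to a theorem whose shortest
    proof exceeds the budget."""
    return 1.0 if atom in bounded_closure(facts, rules, n_steps) else prior
```

With a chain a → b → c → d and a budget of one step, the reasoner assigns `d` only its prior, violating consistency with respect to the full logic; raise the budget to three steps and `d` gets probability 1. The interesting (and unresolved) question is what coherence constraints survive when probabilities are only required to respect the bounded closure rather than the full one.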