Let me see if I can put that in my own words; if not, I didn’t understand it. You are saying that humans, who do not operate strictly by PA, know that a proof of the existence of a proof is itself a proof; but a reasoner strictly limited to PA would not know any such thing, because it’s not a theorem of PA. (PA being just an example—it could be any formal system, or at least any formal system that doesn’t include the concept of proofs among its atoms, or concepts.) So such a reasoner can be shown a proof that a proof of A exists, but will not know that A is therefore a theorem of PA. Correct?
To me this seems more like a point about limitations of PA than about AI or logic per se; my conclusion would be “therefore, any serious AI needs a formal system with more oomph than PA”. Is this a case of looking at PA “because that’s where the light is”, ie it’s easy to reason about; or is there a case that solving such problems can inform reasoning about more realistic systems?
Strictly speaking, Löb's Theorem doesn't show that PA fails to prove, for every statement, that its provability implies it. It just shows that if PA proves a statement of the form (If S is provable, then S), then PA can use this to prove S itself. The part about PA not proving any implication of that form for a false S only follows if we assume that PA is sound.
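For reference, and writing \Box S for PA's arithmetized provability predicate applied to the Gödel number of S (this notation isn't in the comment above, it's just shorthand), the theorem and its contrapositive are:

% Löb's Theorem: provable self-trust in S yields a proof of S.
\[ \text{If } PA \vdash (\Box S \rightarrow S), \text{ then } PA \vdash S. \]
% Contrapositive: if PA cannot prove S, it cannot prove (\Box S -> S) either.
\[ \text{If } PA \nvdash S, \text{ then } PA \nvdash (\Box S \rightarrow S). \]

This is only the usual symbolic form of the claim in the paragraph above, not an addition to the argument.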
Therefore, replacing PA with a stronger system, or adding a primitive concept of provability in place of PA's complicated arithmetical construction, won't help. As long as the new system can do everything PA can do (for example, prove that it can prove things it can prove), it will always be able to get from (If S is provable, then S) to S, even if S is 3*5=56.
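To make "everything PA can do" precise: the argument only needs the standard Hilbert–Bernays–Löb derivability conditions, so any theory T satisfying them (together with the diagonal lemma) inherits the same problem. A sketch, in the same \Box notation as above:

% Derivability conditions for a theory T with provability predicate \Box:
\[ \text{(D1) If } T \vdash S, \text{ then } T \vdash \Box S. \]
\[ \text{(D2) } T \vdash \Box(S \rightarrow S') \rightarrow (\Box S \rightarrow \Box S'). \]
\[ \text{(D3) } T \vdash \Box S \rightarrow \Box\Box S \quad \text{(``it can prove that it can prove things it can prove'').} \]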
Let me see what happens if I put in a specific example. Suppose that
If 3*5=35 is a theorem of PA, then 3*5=35
is a theorem of PA. Let me refer to "3*5=35 is a theorem" as sentence 1, "3*5=35" as sentence 2, and the implication 1->2 as sentence 3. Now, if sentence 3 is a theorem, then you can use PA to prove sentence 2 even without actually showing sentence 1; and then PA has proven a falsehood, and is inconsistent. Is that a correct statement of the problem?
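In the \Box notation from above (again just notation, not an extra assumption), the three sentences are:

% The three sentences from the example, with \Box for PA-provability.
\[ \text{sentence 1: } \Box(3 \times 5 = 35) \]
\[ \text{sentence 2: } 3 \times 5 = 35 \]
\[ \text{sentence 3: } \Box(3 \times 5 = 35) \rightarrow 3 \times 5 = 35 \]

Löb's Theorem is exactly the step from "PA proves sentence 3" to "PA proves sentence 2".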
If so… I seem to have lost track of the original difficulty, sorry. Why is it a worry that PA will assert that it can prove something false, but not a worry that it will assert something false? If you’re going to worry that sentence 3 is a theorem, why not go straight to worrying that sentence 2 is a theorem?
We generally take it for granted that sentence 2 (because it is false) is not a theorem (and therefore sentence 1 is false), and the argument is meant to show that sentence 3 is therefore also not a theorem (even though it is true, since sentence 1 is false). This is a problem because we would like to use reasoning along the lines of sentence 3.
To see the real issue, you should replace sentence 2 with a conjecture whose truth you don't yet know but would like to know (and modify sentences 1 and 3 correspondingly). Now wouldn't you like sentence 3 to be true? If you knew that sentence 1 was true, wouldn't you like to conclude that sentence 2 is true? Yet if you're PA, then you can't do that.
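The shape of the obstruction, in the notation from above (assuming only that PA satisfies the derivability conditions), is that the reflection instance for S becomes available exactly when it is no longer needed:

\[ PA \vdash (\Box S \rightarrow S) \quad \text{if and only if} \quad PA \vdash S. \]
% Left to right is Löb's Theorem; right to left is trivial, since anything
% implies a statement PA already proves.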
“therefore, any serious AI needs a formal system with more oomph than PA”
The problem with that is that the same argument goes through in exactly the same way with any stronger system replacing PA. You might first try something like adding a rule "if PA proves that PA proves S, then S". This solves your original problem, but introduces new ones: there are now new statements that your system can prove that it can prove, but that it can't prove. Eliezer discusses this system, under the name PA+1, in You Provably Can't Trust Yourself.
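For concreteness, one schematic way to write that kind of extension (this is my reading of the rule quoted above; the exact formulation of PA+1 is in the linked post) is PA plus a reflection schema over PA:

\[ PA{+}1 \;=\; PA \;+\; \{\, \mathrm{Prov}_{PA}(\ulcorner S \urcorner) \rightarrow S \;:\; S \text{ a sentence} \,\} \]
% Löb's Theorem now applies to PA+1's own provability predicate: PA+1 cannot
% prove (Prov_{PA+1}(S) -> S) unless it already proves S, so the same gap
% reopens one level up.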