I realized after reading this that I’d stated the Compactness Theorem much more strongly than I needed, and that I only needed the fact that infinite semantic inconsistency implies finite semantic inconsistency, never mind syntactic proofs of inconsistency, so I did a quick rewrite accordingly. Hopefully this addresses your worries about “muddled description”, although initially I was confused about what you meant by “muddled” since I’d always carefully distinguished semantics from syntax at each point in the post.
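In symbols (Γ here is just my shorthand for an arbitrary set of first-order sentences), the only form of Compactness being used is

% semantic (contrapositive) form of Compactness; no proof system is involved
\text{if } \Gamma \text{ has no model, then some finite } \Gamma_0 \subseteq \Gamma \text{ already has no model,}

or equivalently: if every finite subset of Γ has a model, then Γ has a model. No notion of syntactic derivation appears anywhere in that statement.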
I was also confused by what you meant by “nonstandard models resulting from the Compactness Theorem” versus “nonstandard models resulting from the Incompleteness Theorem”—the nonstandard models are just there, after all, they don’t poof into existence as a result of one Theorem or the other being proved. But yes, the Compactness Theorem shows that even adjoining all first-order stateable truths about the natural numbers to PA (resulting in a theory not describable within PA) would still give a theory with nonstandard models.
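For concreteness, here is a sketch of the textbook construction behind that last claim (my notation: Th(ℕ) is the set of all first-order truths about the natural numbers, c is a fresh constant symbol, and the underlined numbers are numerals):

% expand the language with a new constant c and force it above every numeral
T \;=\; \operatorname{Th}(\mathbb{N}) \;\cup\; \{\, c > \underline{0},\; c > \underline{1},\; c > \underline{2},\; \dots \,\}

Any finite subset of T mentions only finitely many of the “c > n” axioms, so the standard model satisfies it once c is read as a large enough number; by Compactness, all of T has a model, and in that model c names an element above every standard numeral, i.e. the model is nonstandard.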
I think “semantic consistency” is not a very good phrase, and you should consider replacing it with “satisfiability” or, if that seems too technical, “realizability”. The word “inconsistent” tells us that there’s some sort of contradiction hidden within. But there could be statements without contradiction that are nevertheless not realizable—not in our logic, thanks to the Completeness theorem, but in some other, perhaps less useful one. Imagine for example that you tried to develop mathematical logic from scratch, and defined “models” in such a way that only finite sets can serve as their domains (perhaps because you’re a hardcore finitist or something). Then your class of models is too poor and doesn’t sustain the Completeness theorem. There are consistent finite sets of statements, from which no contradiction may be syntactically derived, that are only realizable in infinite models and so are not realizable at all in this hypothetical logic. It feels wrong to call them “semantically inconsistent”, even though technically you can, of course; it’s just a definition. “Realizable” seems better.
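To make this concrete with a standard example of my own choosing: the two sentences below, about a single unary function symbol f, are syntactically consistent, but together they say that f is injective and not surjective, which is possible only on an infinite domain:

\forall x\,\forall y\,\bigl(f(x)=f(y)\rightarrow x=y\bigr) % f is injective
\exists y\,\forall x\,\bigl(f(x)\neq y\bigr) % f is not surjective

In the hypothetical finite-models-only logic they would be unrealizable, yet no contradiction can be derived from them, so calling them “inconsistent” would mislead.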
I feel that this example is part of a larger trend. Think of first-order logic as a perfect fit between syntactic notions (how formulas are built up, what is a proof) and semantic notions (how to assign a truth value to a statement in a model, what models exist and how they’re defined). A priori it’s not clear or preordained that these two fit together like a hand in a glove, but thanks to the Soundness and Completeness theorems they actually do. You keep using that fit to jump seamlessly between semantic notions and syntactic notions, and although you’re not committing any error, I think the result is confusing; in “alternative universes”—other logics—the fit doesn’t exist or is very different, and to gain understanding and appreciation of how logic works the two must be kept sharply separate in one’s mind. I’m not saying that you don’t appreciate the difference—I’m saying that pedagogically your posts in this sequence fail to make the reader understand it.
I was confused about what you meant by “muddled” since I’d always carefully distinguished semantics from syntax at each point in the post.
Here’s an example from an earlier post:
...Gosh. I think I see the idea now. It’s not that ‘axioms’ are mathematicians asking for you to just assume some things about numbers that seem obvious but can’t be proven. Rather, axioms pin down that we’re talking about numbers as opposed to something else.
“Exactly. That’s why the mathematical study of numbers is equivalent to the logical study of which conclusions follow inevitably from the number-axioms. The way that theorems like 2 + 2 = 4 are syntactically provable from those axioms reflects the way that 2 + 2 = 4 is semantically implied within this unique mathematical universe that the axioms pin down.
Up until this point in the post you were only talking about how particular sentences are or are not able, semantically, to “pin down” a particular model. But now suddenly you’re talking about “follow inevitably” and “syntactically provable”, and the reader who’s not trained in logic has no clues to tell them that you suddenly switched tracks and are talking about a completely different thing. The “that’s why” is incoherent, because the set of syntactic conclusions from number-axioms is smaller than the set of semantic truths about standard numbers. Here the way you leap between syntactic and semantic notions leads you astray. Your “the way that...” sentence alludes to a completeness theorem that second-order logic (which is what you’re talking about here) doesn’t have! Think about it: second-order PA does have only one model, but the fact that 2+2=4 holds in this model does not give you license to deduce that 2+2=4 is syntactically provable from second-order PA axioms. The “reflects” in your sentence is wrong! (Read the answer to this post for a useful summary, which also clarifies how the limitations established by Gödel’s Incompleteness Theorems still apply in a certain way to second-order arithmetic.)
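To put that gap in symbols (my notation: PA₂ for the second-order Peano axioms, ⊨ for truth in their unique full model, and ⊢ for any sound, effective deductive system for second-order logic):

% semantic consequence does not guarantee provability in second-order logic
\mathrm{PA}_2 \models \varphi \quad\not\Longrightarrow\quad \mathrm{PA}_2 \vdash \varphi

By Gödel’s theorem, any such ⊢ must miss some φ that is true in the unique model. In first-order logic, the Completeness theorem is precisely what closes this gap; that is the theorem your “reflects” silently leans on.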
I’m convinced that a reader of this sequence who is not educated in logic simply won’t notice the leaps between syntax and semantics that you habitually make in an undisciplined fashion, and will not get a clear picture of which nonstandard models exist and why, or of how proof systems, Completeness and Incompleteness interact with their existence.
I was also confused by what you meant by “nonstandard models resulting from the Compactness Theorem” versus “nonstandard models resulting from the Incompleteness Theorem”—the nonstandard models are just there, after all, they don’t poof into existence as a result of one Theorem or the other being proved.
Well, I thought I made that clear in my comment. Yes, they are there, but you can imagine other logics where one of the theorems holds but not the other, and see that one kind of nonstandard model remains while the other disappears. To working mathematicians and logicians, the difference between them is vast. Löwenheim-Skolem was proved, I think in 1917 or thereabouts, but until Gödel’s Incompleteness Theorem appeared, nobody thought it demonstrated that first-order logic is unsuitable for formalizing mathematics because it doesn’t “pin down standard numbers”.
That’s why I think that your emphasis on “pins down a unique model” is wrongheaded and doesn’t reflect the real concerns mathematicians and logicians had, and continue to have, about how well axiomatic systems formalize mathematics. The property is simply too coarse to distinguish, e.g., between incomplete and complete theories—even complete theories have nonstandard models in first-order logic. In an alternate universe where Gödel’s Incompleteness Theorem doesn’t hold and PA is a complete first-order theory that proves or disproves any statement about natural numbers, approximately nobody cares that it has nonstandard models due to Compactness, and everybody is very happy to have a set of axioms able, in principle, to answer every question you can ask about natural numbers. (If you protest that PA is incomplete in any alternate universe, consider that there are complete first-order theories, e.g. the theory of real closed fields.)

You’re free to disagree with that and to insist that categoricity—having only one model up to isomorphism—must be the all-important property of the logic we want to use, but your readers are ill-equipped to agree or disagree in an informed way, because your description muddles the hugely important difference between those two kinds of nonstandard models. To mathematicians interested in foundations, the fact that PA is incomplete is a blow to Hilbert’s program and more generally to the project of formalizing mathematical meaning, while nonstandard models existing due to Compactness are at worst an annoying technical distraction, and at best a source of interesting constructions like nonstandard analysis.
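To make the two kinds concrete (a standard illustration, not something from your posts): Incompleteness is what licenses a theory like

% consistent by the second incompleteness theorem (granting PA itself is consistent), hence has a model
\mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA})

to have a model at all, and that model is necessarily nonstandard, because ¬Con(PA) is false in ℕ. Compactness, by contrast, yields nonstandard models even of the complete theory Th(ℕ), as in the extra-constant construction sketched earlier. The first kind exposes a real gap between what PA proves and what is true; the second is there no matter how complete your first-order theory is.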
I’ll try a couple more edits, but keep in mind that this isn’t aimed at logicians concerned about Hilbert’s program, it’s aimed at improving gibberish-detection skills (sentences that can’t mean things) and avoiding logic abuse (trying to get empirical facts from first principles) and improving people’s metaethics and so on.