ZFC’s countable model isn’t that weird.
Imagine a computer programmer, watching a mathematician working at a blackboard. Imagine asking the computer programmer how many bytes it would take to represent the entities that the mathematician is manipulating, in a form that can support those manipulations.
The computer programmer will do a back-of-the-envelope calculation, something like: “The set of all natural numbers” is 30 characters, and essentially all of the special symbols are already in Unicode and/or TeX, so probably hundreds, maybe thousands of bytes per blackboard, depending. That is, the computer programmer will answer “syntactically”.
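The programmer's estimate is directly checkable. A quick sketch (the strings here are just illustrative examples, not anything from the original scenario):

```python
# The programmer's back-of-the-envelope count: the *syntax* is tiny.
s = "The set of all natural numbers"
print(len(s))                    # 30 characters
print(len(s.encode("utf-8")))    # 30 bytes, since every character is ASCII

# Even with special symbols the cost stays small:
t = "ℕ = {0, 1, 2, …}"
print(len(t.encode("utf-8")))    # 20 bytes (ℕ and … are 3 bytes each in UTF-8)
```

A denotation that a mathematician would call infinite fits in a few dozen bytes of syntax.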
Of course, the mathematician might claim that the “entities” that they’re manipulating are more than just the syntax, and are actually much bigger. That is, they might answer “semantically”. Mathematicians are trained to see past the syntax to various mental images. They are trained to answer questions like “how big is it?” in terms of those mental images. A math professor asking “How big is it?” might accept answers like “it’s a subset of the integers” or “it’s a superset of the power set of the reals”. The programmer’s answer of “maybe 30 bytes” seems, from the mathematical perspective, about as ridiculous as “It’s about three feet long right now, but I can write it longer if you want”.
The weirdly small models are only weirdly small if what you thought you were manipulating was something other than finite (and therefore Gödel-numberable) syntax.
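The Gödel-numberability claim can be made concrete with a toy encoding (an illustrative sketch, not any standard scheme, and the symbol set below is made up): read each formula as the digits of a single natural number in a base determined by a fixed finite alphabet.

```python
# Toy Gödel numbering: every finite string over a finite alphabet
# corresponds to a unique natural number, by reading the string as
# digits in base len(ALPHABET) + 1 (digit values 1..base-1, so the
# encoding is injective and decodes unambiguously).
ALPHABET = "()∈=∧∨¬∀∃xv0123456789, "  # small illustrative symbol set

def godel_number(formula: str) -> int:
    base = len(ALPHABET) + 1
    n = 0
    for ch in formula:
        n = n * base + (ALPHABET.index(ch) + 1)
    return n

def decode(n: int) -> str:
    base = len(ALPHABET) + 1
    out = []
    while n:
        n, d = divmod(n, base)
        out.append(ALPHABET[d - 1])
    return "".join(reversed(out))

g = godel_number("∀x(x=x)")
assert decode(g) == "∀x(x=x)"  # the number carries the whole formula
```

Every formula on the blackboard, however grand its intended meaning, is one natural number under such a scheme.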
Models are semantics. The whole point of models is to give semantic meaning to syntactic strings.
I haven’t studied the proof of the Löwenheim–Skolem theorem, but I would be surprised if it were as trivial as the observation that there are only countably many sentences in ZFC. It’s not at all clear to me that you can convert the language in which ZFC is expressed into a model for ZFC in a way that would establish the Löwenheim–Skolem theorem.
I have studied the proof of the (downward) Löwenheim–Skolem theorem (as an undergraduate, so take my conclusions with a grain of salt), and my understanding is exactly that the proof builds a model out of the syntax of the first-order theory in question.
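One half of that intuition is mechanically checkable: the sentences of any first-order language over a finite (or countable) alphabet can be enumerated, so there are only countably many of them. (This is only the easy observation; the actual proof does more, e.g. a Henkin-style construction to turn terms into a model.) A sketch of the enumeration, with a made-up illustrative alphabet:

```python
from itertools import count, product

# Enumerate all finite strings over a finite alphabet in
# length-then-lexicographic order. The well-formed sentences of a
# first-order theory are a subset of these strings, hence countable.
ALPHABET = "()∈=∧¬∀xv'"  # illustrative symbol set for a set-theoretic language

def all_strings():
    for n in count(1):
        for tup in product(ALPHABET, repeat=n):
            yield "".join(tup)

# Every string gets a definite finite position in the enumeration:
gen = all_strings()
first_ten = [next(gen) for _ in range(10)]  # the ten length-1 strings
```

The harder part, which the proof supplies, is showing that this countable supply of syntax can be organized into an actual model satisfying the theory.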
I’m not saying that the proof is trivial. What I’m saying is that keeping Gödel-numberability, and the possibility of a strict formalist interpretation of mathematics, in mind provides a helpful intuition for the result.