Mainstream status:
The presentation of the natural numbers is meant to be standard, including the (well-known and proven) idea that it requires second-order logic to pin them down. There’s some further controversy about second-order logic which will be discussed in a later post.
I’ve seen some (old) arguments about the meaning of axiomatizing which did not resolve into the answer, “Because otherwise you can’t talk about numbers as opposed to something else,” so as far as I know it’s theoretically possible that I’m the first to spell out that idea in exactly that way. But it’s an obvious enough idea, and there’s been enough debate by philosophically inclined mathematicians, that I would be genuinely surprised to find this was the case.
On the other hand, I’ve surely never seen a general account of meaningfulness which puts logical pinpointing alongside causal link-tracing to delineate two different kinds of correspondence within correspondence theories of truth. To whatever extent any of this is a standard position, it’s not nearly widely known enough, or explicitly taught in those terms, to mathematicians outside model theory and mathematical logic, just like the standard position on “proof”. Nor does any of it appear in the SEP entry on meaning.
Very nice post!
Bug: Higher-order logic (a standard term) means “infinite-order logic” (not a standard term), not “logic of order greater than 1” (also not a standard term). (For whatever reason, neither the Wikipedia nor the SEP entry seems to come out and say this, but every reference I can remember used the terms that way, and the usage in the SEP seems to imply it too, e.g. “This second-order expressibility of the power-set operation permits the simulation of higher-order logic within second order.”)
A few points:
i) You don’t actually need to jump directly to second-order logic to get a categorical axiomatization of the natural numbers. There are several weaker ways to do the job: L_omega1_omega (which allows countably infinite conjunctions), adding a primitive finiteness operator, adding a primitive ancestral operator, or allowing the omega rule (i.e. from the infinitely many premises P(0), P(1), …, P(n), …, infer ∀n P(n)). Second-order logic is more powerful than these in that it gives a quasi-categorical axiomatization of the universe of sets (i.e. any two models of ZFC_2 are either isomorphic, or one is isomorphic to an initial segment of the other).
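For reference, the two best-known devices mentioned here can be written out explicitly (these are standard formulations, sketched for illustration): the single second-order induction axiom, with P ranging over all subsets of the domain, and the infinitary omega rule.

```latex
% Second-order induction: one axiom, quantifying over all properties P.
% Under the full semantics this pins down N up to isomorphism.
\forall P\,\Bigl[\, P(0) \;\wedge\; \forall n\,\bigl(P(n) \rightarrow P(S(n))\bigr)
  \;\rightarrow\; \forall n\,P(n) \,\Bigr]

% The omega rule: an infinitary inference with one premise per numeral.
\frac{P(0) \qquad P(1) \qquad P(2) \qquad \cdots}{\forall n\, P(n)}
```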
ii) Although there is a minority view to the contrary, it’s typically thought that going second-order doesn’t help with determinateness worries (i.e. roughly what you are talking about with regard to “pinning down” the natural numbers). The point here is that going second-order only works if you interpret the second-order quantifiers “fully”, i.e. as ranging over the whole power set of the domain rather than over some proper subset of it. But the problem is: how can we rule out non-full interpretations of the quantifiers? This seems like just the same sort of problem as ruling out non-standard models of arithmetic (“the same sort”, not the same, because for the reasons mentioned in (i) it is actually a more stringent condition). The point is that if you for some reason doubt that we have a categorical grasp of the natural numbers, you are certainly not going to grant that we can enforce a full interpretation of the second-order quantifiers. And although it seems intuitively obvious that we have a categorical grasp of the natural numbers, careful consideration of the first incompleteness theorem shows that this is by no means clear.
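The full/non-full contrast here can be stated in one line (a standard distinction, written out for reference): the categoricity proof needs the second-order quantifiers to range over the entire power set of the domain, whereas a Henkin interpretation substitutes some designated subfamily, and under that reading second-order PA behaves like a first-order theory (compactness and Löwenheim–Skolem apply, and categoricity fails).

```latex
% Full (standard) semantics: monadic second-order variables range over
% the entire power set of the domain D.
\text{full:}\quad \forall P \;\text{ ranges over }\; \mathcal{P}(D)

% Henkin (non-full) semantics: they range over a designated family G,
% possibly a proper subset of P(D).
\text{Henkin:}\quad \forall P \;\text{ ranges over }\; \mathcal{G} \subseteq \mathcal{P}(D)
```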
iii) Given that categoricity results hold only up to isomorphism, I don’t see how they help you pin down talk of the natural numbers themselves (as opposed to any old omega-sequence). At best, they help you pin down the structure of the natural numbers, but taking this insight into account is easier said than done.
Generally, things being identical up to isomorphism is considered to make them the same thing in all senses that matter. If something has all the same properties as the natural numbers, in every respect and every particular, then that’s no different from merely changing the names. This is a pretty basic mathematical concept, and that you aren’t familiar with it makes me question the rest of this comment as well.
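The “merely changing the names” point can be made concrete with a toy sketch (the code and all names in it are my own invention, purely for illustration): two different realizations of zero-and-successor, together with an explicit structure-preserving map between them.

```python
# Model A: the naturals as ordinary ints, with +1 as successor.
zero_a = 0
def succ_a(n):
    return n + 1

# Model B: the naturals as tally strings "", "|", "||", ...,
# with append-a-stroke as successor.
zero_b = ""
def succ_b(s):
    return s + "|"

def iso(n):
    """The isomorphism from model A to model B: n maps to n strokes."""
    return "|" * n

# iso maps zero to zero and commutes with successor, so every
# successor-arithmetic fact about one model transfers to the other:
# the two models differ only in the "names" of their elements.
assert iso(zero_a) == zero_b
for n in range(100):
    assert iso(succ_a(n)) == succ_b(iso(n))
```

Any other omega-sequence works the same way: once the structure is fixed up to isomorphism, only the labels differ.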
I think philosophers who think that the categoricity of second-order Peano arithmetic allows us to refer to the natural numbers uniquely tend to also reject the causal theory of reference, precisely because the causal theory of reference is usually put as requiring all reference to be causally guided. Among those, lots of people more-or-less think that references can be fixed by some kinds of description, and I think logical descriptions of this kind would be pretty uncontroversial.
OTOH, for some reason everyone in philosophy of maths is allergic to second-order logic (blame Quine), so the categoricity argument doesn’t always hold water. For some discussion, there’s a section in the SEP entry on Philosophy of Mathematics.
(To give one of the reasons why people don’t like SOL: to interpret it fully you seem to need set theory. Properties basically behave like sets, and so you can make SOL statements that are valid iff the Continuum Hypothesis is true, for example. It seems wrong that logic should depend on set theory in this way.)
This is a facepalm “duh” moment; I hear this criticism all the time, but it does not mean that “logic” depends on “set theory”. There is a confusion here between what can be STATED and what can be KNOWN. The criticism only has force if you think that all “logical truths” ought to be recognizable, so that they can be effectively enumerated. But the critics don’t mind that for any effective enumeration of theorems of arithmetic, there are true statements about the integers that won’t be included; we can’t KNOW all the true facts about the integers. So the criticism of second-order logic boils down to saying that you don’t like the word “logic” being applied to any system powerful enough to EXPRESS quantified statements about the integers, but only to systems weak enough that all their consequences can be enumerated.
This demand is unreasonable. Even if logic is only about “correct reasoning”, the usual framework given by SOL does not presume any dubious principles of reasoning and ZF proves its consistency. The existence of propositions which are not deductively settled by that framework but which can be given mathematical interpretations means nothing more than that our repertoire of “techniques of correct reasoning”, which has grown over the centuries, isn’t necessarily finalized.
“Because otherwise you can’t talk about numbers as opposed to something else,”

The Abstract Algebra course I took presented it in this fashion. I have a hard time seeing how you could even have abstract algebra without this notion.
What about Steven Landsburg’s frequent crowing about the Platonicity of math, and how numbers are real because we can “directly perceive them”? How does this relate to it?
EDIT: Well, he replies here.
I was wondering what he thought about this!
While I greatly sympathize with the “Platonicity of math”, I can’t shake the idea that my reasoning about numbers isn’t any kind of direct perception, but just reasoning about an in-memory representation of a model that is ultimately based on all the other systems that behave like numbers.
I find the arguments about how not all true statements regarding the natural numbers can be inferred via first-order logic tedious. It doesn’t seem like our understanding of the natural numbers is particularly impoverished because of it.
so AFAIK it’s theoretically possible that I’m the first to spell out that idea in exactly that way

I remember explaining the Axiom of Choice in this way to a fellow undergraduate on my integration theory course in late 2000. But of course it never occurred to me to write it down, so you only have my word for this :-)
This post definitely deserves a lot of credit.
If memory serves, Hofstadter uses roughly this explanation in GEB.
This is pretty close to how I remember the discussion in GEB. He has a good discussion of non-Euclidean geometry. He emphasizes that originally the negation of the Parallel Postulate was viewed as absurd, but that now we can understand the non-Euclidean axioms as perfectly reasonable statements which describe something other than the plane geometry we are used to. Later he has a bit of a discussion of what a model of PA + NOT(CON(PA)) would look like. I remember finding it pretty confusing, and I didn’t really know what he was getting at until I read some actual logic textbooks. But he did get across the idea that the axioms would still describe something, but that that something would be larger and stranger than the integers we think we know.
???
IIRC, Hofstadter is a firm formalist, and I don’t see how that squares with EY’s apparent correspondence theory. At least, I don’t see the point in correspondence if what is being corresponded to is itself generated by axioms.