What’s a “natural number”?

A big problem with natural numbers is that the axiomatic method breaks down on them.

Mystery #1: if we’re allowed to talk about sets of natural numbers, sets of these sets, etc., then some natural-sounding statements are neither provable nor disprovable (“independent”) in any of the “natural” axiomatic systems we’ve invented so far. For example, the continuum hypothesis can be reformulated as a statement about sets of sets of natural numbers. The root cause is that we can’t completely axiomatize which sets of natural numbers exist, because there are too many of them. That’s the essential difference between second-order logic and first-order logic; logicians say that second-order logic is “defined semantically”, not by any syntactic procedure of inference.
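
To make that contrast concrete, here is induction stated both ways (standard formulations, nothing specific to this post). In second-order logic it is a single axiom that quantifies over all subsets of ℕ:

\[
\forall P \subseteq \mathbb{N}\;\Bigl[\bigl(0 \in P \;\wedge\; \forall n\,(n \in P \rightarrow n{+}1 \in P)\bigr) \rightarrow \forall n\,(n \in P)\Bigr]
\]

First-order Peano arithmetic can only approximate this with a schema, one axiom for each formula φ of its language:

\[
\bigl(\varphi(0) \;\wedge\; \forall n\,(\varphi(n) \rightarrow \varphi(n{+}1))\bigr) \rightarrow \forall n\,\varphi(n)
\]

There are only countably many formulas φ but uncountably many subsets of ℕ, so the schema covers far fewer induction instances than the single second-order axiom; that shortfall is why first-order PA admits nonstandard models.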

Mystery #2: if we’re allowed to talk about arithmetic and use quantifiers (exists and forall) over numbers, but not over sets of them—in other words, use first-order logic only—then some natural-sounding statements appear to be true, yet to prove them we need to accept as axioms a lot of intuition about concepts other than natural numbers. For example, Goodstein’s theorem is a simple arithmetical statement that cannot be proved in Peano arithmetic, but can be proved in “stronger theories” such as ZFC. This means the theorem is a consequence of our intuition that some “stronger theory” is consistent (more precisely, that it proves only true statements about numbers)—but where did that intuition come from? It doesn’t seem to be talking about natural numbers anymore.
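
To see what the arithmetical statement actually says, here is a small Python sketch of Goodstein sequences (the function names bump_base and goodstein are mine, for illustration only): write the current value in hereditary base-b notation, replace every occurrence of b by b+1, subtract 1, and repeat with the next base.

```python
def bump_base(n, b):
    """Write n in hereditary base-b notation, replace every occurrence
    of the base b by b + 1, and evaluate the result."""
    result, e = 0, 0
    while n > 0:
        n, digit = divmod(n, b)                        # base-b digit at exponent e
        result += digit * (b + 1) ** bump_base(e, b)   # exponents get rewritten too
        e += 1
    return result

def goodstein(m, steps):
    """Yield up to `steps` terms of the Goodstein sequence starting at m."""
    b = 2
    for _ in range(steps):
        yield m
        if m == 0:
            return
        m = bump_base(m, b) - 1                        # bump the base, then subtract 1
        b += 1

print(list(goodstein(3, 8)))   # [3, 3, 3, 2, 1, 0]  reaches 0 almost immediately
print(list(goodstein(4, 8)))   # [4, 26, 41, 60, 83, 109, 139, 173]  still climbing
```

Goodstein’s theorem says that every such sequence eventually reaches 0, even though (as the second example hints) the terms can grow astronomically first; that single, concrete claim is what PA cannot prove.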

Can we teach a computer to think about natural numbers the same way we do, that is, somehow non-axiomatically? Not just treat numbers as opaque “things” that obey the axioms of PA—that would make a lot of true theorems unreachable! This seems to be the simplest AI-hard problem that I’ve ever seen.
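
A proof assistant makes the contrast tangible. Below is a Lean 4 sketch (the OpaqueNumbers structure and its field names are hypothetical, invented here for illustration): on one side, numbers as an opaque type constrained by a few PA-style axioms; on the other, Lean’s own inductive Nat, whose induction principle comes for free from the way the type is constructed.

```lean
-- Numbers as opaque "things": a type about which we know only what a few
-- PA-style axioms say. (Hypothetical structure and field names; a sketch only.
-- The full first-order induction schema can't even be written as one field here.)
structure OpaqueNumbers where
  Num : Type
  zero : Num
  succ : Num → Num
  succ_ne_zero : ∀ n : Num, succ n ≠ zero
  succ_injective : ∀ m n : Num, succ m = succ n → m = n

-- Lean's own Nat, by contrast, is an inductive type: an induction principle
-- for arbitrary predicates is generated from the construction itself.
#check @Nat.rec
-- roughly: {motive : Nat → Sort u} → motive Nat.zero →
--          ((n : Nat) → motive n → motive (Nat.succ n)) → (t : Nat) → motive t
```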