Of course we can explain it to a machine, just as we explain it to a person: by using second-order concepts (like “smallest set of thingies closed under zero and successor”).
Of course, we then need to leave some aspects of those second-order concepts unexplained and ambiguous, for both machines and humans.
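For concreteness, the second-order characterization gestured at above is usually written like this (my rendering in standard notation, not part of the original comment):

```latex
% The natural numbers as the smallest set containing 0
% and closed under the successor function s:
\mathbb{N} \;=\; \bigcap \bigl\{\, S \;\bigm|\; 0 \in S \,\wedge\, \forall n\,(n \in S \rightarrow s(n) \in S) \,\bigr\}
```

The intersection ranges over sets of individuals, which is exactly the second-order ingredient that a purely first-order axiomatization cannot express.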
I don’t understand what you’re referring to in your second sentence. Can you elaborate? What sorts of things need to be ambiguous?
By ‘ambiguous’, I meant to suggest the existence of multiple non-isomorphic models.
The thing that puzzled cousin_it was that the axioms of first-order Peano arithmetic can be satisfied by non-standard models of arithmetic, and that there is no way to add additional first-order axioms to exclude these unwanted models.
The solution I proposed was to use a second-order axiom of induction—working with properties (i.e., sets) of numbers rather than first-order predicates over numbers. This approach successfully excludes all the non-standard models of arithmetic, leaving only the desired standard model of cardinality aleph-null. But it extends the domain of discourse from simply numbers to both numbers and sets of numbers. And now we are left with the ambiguity of what model of sets of numbers we want to use.
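To make the contrast concrete, here is the difference between the two forms of induction, sketched in standard notation (my rendering, not the commenter’s):

```latex
% First-order induction: a schema, one axiom per formula phi.
% There are only countably many formulas, so the schema cannot
% exclude non-standard models.
\bigl[\varphi(0) \wedge \forall n\,(\varphi(n) \rightarrow \varphi(n+1))\bigr]
  \rightarrow \forall n\,\varphi(n)

% Second-order induction: a single axiom quantifying over
% all sets P of numbers, which pins down the standard model.
\forall P\, \Bigl[\bigl(P(0) \wedge \forall n\,(P(n) \rightarrow P(n+1))\bigr)
  \rightarrow \forall n\, P(n)\Bigr]
```

The second-order axiom works because it quantifies over all subsets of the domain, not just the countably many first-order definable ones; the price, as noted above, is that we must now fix an interpretation of “all subsets”.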
It is mildly amusing that in the case of arithmetic the unwanted non-standard models were all too big, but in the case of set theory people seem to prefer to think of the large models as standard and dismiss Gödel’s constructive set theory as an aberration.
Depends on what you mean by ‘large’, I suppose. A non-well-founded model of ZFC is ‘larger’ than the well-founded submodel it contains (in the sense that it properly contains that submodel), but it certainly isn’t “standard”.
By Gödel’s constructive set theory, are you talking about set theory plus the axiom of constructibility (V=L)? V=L is hardly ‘dismissed as an aberration’, any more than the field axioms are an ‘aberration’. But just as there is more scope for a ‘theory of rings’ than a ‘theory of fields’, adding V=L as an axiom (and making a methodological decision to refrain from exploring universes where it fails) has the effect of truncating the hierarchy of large cardinals: everything above zero-sharp becomes inconsistent.
Furthermore, the picture of L sitting inside V that emerges from the study of zero-sharp is so stark and clear that set theorists will never want to let it go. (“No one will drive us from the paradise which Jack Silver has created for us”.)
Thank you for the clarification.