First-order logic can’t distinguish between different sizes of infinity. By the Löwenheim–Skolem theorems, any finite or countable set of first-order statements with an infinite model has models of every infinite cardinality.
However, if you take second-order logic at face value, it’s actually quite easy to uniquely specify the integers up to isomorphism. The price of this is that second-order logic is not complete: the full set of semantic implications, the statements true in every model of the axioms, can’t be derived by any finite set of syntactic rules.
So if you can use second-order statements—and if you can’t, it’s not clear how we can possibly talk about the integers—then the structure of integers, the subject matter of integers, can be compactly singled out by a small set of finite axioms. However, the implications of these axioms cannot all be printed out by any finite Turing machine.
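For concreteness, the axiom doing the work here is second-order induction: unlike the first-order induction schema, which only covers properties definable by formulas, it quantifies over every subset of the domain, and that is what pins down the standard integers up to isomorphism. A sketch of the axiom (notation is illustrative, with S as the successor function):

```latex
% Second-order induction: P ranges over ALL subsets of the domain,
% not merely over first-order-definable properties.
\forall P \,\Bigl[\bigl(P(0) \land \forall n\,(P(n) \to P(S(n)))\bigr) \to \forall n\, P(n)\Bigr]
```

Because P ranges over arbitrary subsets, any model of these axioms contains no nonstandard elements, which is exactly what the first-order schema fails to guarantee.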
Appropriately defined, you could state this as “finitely complex premises can yield infinitely complex conclusions,” provided that the finite complexity of the premises is measured by the size of the Turing machine which prints out the axioms, yielding is defined as semantic implication (that which is true in all models in which the axioms are true), and the infinite complexity of the conclusions is defined by the nonexistence of any finite Turing machine which prints them all.
However this is not at all the sort of thing that Dawkins is talking about when he talks about evolution starting simple and yielding complexity. That’s a different sense of complexity and a different sense of yielding.
That makes more sense, thanks.
Any recommended reading on this sort of thing?