Replying out of order:
2) A quick search of Google Scholar didn’t net me a Chaitin definition of K-complexity for a structure. This doesn’t surprise me much, as his uses of AIT in logic are much more oriented toward proof theory than model theory. Over here you can see some of the basic definitions. If you read pages 7-10 and then my explanation to Silas here, you can figure out what the K-complexity of a structure means. There’s also a definition of the algorithmic complexity of a theory in section 3 of the Chaitin paper.
According to these definitions, the complexity of N is a few hundred bits for reasonable choices of machine, and the complexity of T(N) is infinite.
1) It actually is pretty hard to characterize N extrinsically/intensionally; to characterize it with first-order statements takes infinite information (as above). The second-order characterization, by contrast, is a little hard to interpret. It takes a finite amount of information to pin down the model, but the second-order theory PA2 still has infinite K-complexity because it lacks complete rules of inference.
Intrinsic/extensional characterizations, on the other hand, are simple to do, as referenced above. Really, Gödel Incompleteness wouldn’t be all that shocking in the first place if we couldn’t specify N any other way than its first-order theory! Interesting, yes, shocking, no. The real scandal of incompleteness is that you can so simply come up with a procedure for listing all the ground (quantifier-free) truths of arithmetic and yet passing either to or from the kind of generalizations that mathematicians would like to make is fraught with literally infinite peril.
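To make the “procedure for listing all the ground truths” concrete, here is a minimal Python sketch. The term language ({0, S, +, *}) and the enumeration order are my own illustrative choices, not anything from the linked notes; the point is just that a short program eventually emits every true quantifier-free equation:

```python
from itertools import count, product

def closed_terms(depth):
    """All closed terms of {0, S, +, *} up to a nesting depth,
    paired with their values in N."""
    if depth == 0:
        return [("0", 0)]
    smaller = closed_terms(depth - 1)
    terms = list(smaller)
    for (s, v) in smaller:
        terms.append((f"S({s})", v + 1))
    for (s, v), (t, w) in product(smaller, smaller):
        terms.append((f"({s}+{t})", v + w))
        terms.append((f"({s}*{t})", v * w))
    return terms

def ground_truths():
    """Enumerate every true ground equation s = t.
    Each true equation shows up at some finite stage, so this short
    program 'contains' all the quantifier-free truths of arithmetic."""
    seen = set()
    for depth in count(1):
        for (s, v), (t, w) in product(closed_terms(depth), repeat=2):
            if v == w and (s, t) not in seen:
                seen.add((s, t))
                yield f"{s} = {t}"
```

The enumerator itself is only a few lines, which is the sense in which the ground truths are K-simple; the peril starts when you quantify.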
3&4) Actually I don’t think that Dawkins is talking about K-complexity, exactly. If that’s all you’re talking about, after all, an equal-weight puddle of boiling water has more K-complexity than a squirrel does. I think there’s a more involved, composite notion at work that builds on K-complexity and which has so far resisted full formalization. Something like this, I’d venture.
The complexity of the natural numbers as a subject of mathematical study, while certainly well-attested, seems to be of a different sense than either K-complexity or the above. Further, it’s unclear whether we should really be placing the onus of this complexity on N, on the semantics of quantification in infinite models (which N just happens to bring out), or on the properties of computation in general. In the latter case, some would say the root of the complexity lies in physics.
Also, I very much doubt that he had in mind mathematical structures as things that “exist”. Whether it turns out that the difference in the way we experience abstractions like the natural numbers and concrete physical objects like squirrels is fundamental, as many would have it, or merely a matter of our perspective from within our singular mathematical context, as you among others suspect, it’s clear that there is some perceptible difference involved. It doesn’t seem entirely fair to press the point this much without acknowledging the unresolved difference in ontology as the main point of conflict.
Trying to quantify which thing is more complex is really kind of a sideshow, although an interesting one. If one forces both senses of complexity into the K-complexity box then Dawkins “wins”, at the expense of both of you being turned into straw men. If one goes by what you both really mean, though, I think the two complexities are probably incommensurable (no common definition or scale) and the comparison is off-point.
5) Thank you. I hope the discussion here continues to grow more constructive and helpful for all involved.
Relevant link: http://lesswrong.com/lw/vh/complexity_and_intelligence/
Splat:
Thanks again for bringing insight and sanity to this discussion. A few points:
1) Your description of the structure N presupposes some knowledge of the structure N; the program that prints out the structure needs a first statement, a second statement, etc. This is, of course, unavoidable, and it’s therefore not a complaint; I doubt that there’s any way to give a formal description of the natural numbers without presupposing some informal understanding of the natural numbers. But what it does mean, I think, is that K-complexity (in the sense that you’re using it) is surely the wrong measure of complexity here—because when you say that N has low K-complexity, what you’re really saying is that “N is easy to describe provided you already know something about N”. What we really want to know is how much complexity is embedded in that prior knowledge.
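The circularity is easy to see in code. A toy “program that prints out the structure” (my own illustration, nothing canonical) might be:

```python
from itertools import count

def atomic_diagram():
    """Enumerate the successor facts about N, one per step.
    Note the circularity: the loop itself presupposes an informal
    notion of 'next natural number' (here, Python's count()), so we
    are describing N using something already N-shaped."""
    for n in count():  # the informal naturals, presupposed
        yield f"S({n}) = {n + 1}"
```

The first statement, second statement, and so on are exactly the iterations of that loop, which is where the prior knowledge hides.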
1A) On the other hand, I’m not clear on how much of the structure of N is necessarily assumed in any formal description, so my point 1) might be weaker than I’ve made it out to be.
2) It has been my position all along that K-complexity is largely a red herring here in the sense that it need not capture Dawkins’s meaning. Your observation that a pot of boiling water is more K-complex than a squirrel speaks directly to this point, and I will probably steal it for use in future discussions.
3) When you talk about T(N), I presume you mean the language of Peano arithmetic, together with the set of all true statements in that language. (Correct me if I’m wrong.) I would hesitate to call this a theory, because it’s not recursively axiomatizable, but that’s a quibble. In any event, we do know what we mean by T(N), but we don’t know what we mean by T(squirrel) until we specify a language for talking about squirrels—a set of constant symbols corresponding to tail, head, etc., or one for each atom, or..., and various relations, etc. So T(N) is well defined, while T(squirrel) is not. But whatever language you settle on, a squirrel is still going to be a finite structure, so T(squirrel) is not going to share the “wild nonrecursiveness” of T(N) (which is closely related to the difficulty of giving an extrinsic characterization). That seems to me to capture a large part of the intuition that the natural numbers are more complex than a squirrel.
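The contrast can be made concrete: over a finite structure, first-order truth is decidable by brute force, so the full theory of any finite structure is computable in a way T(N) is not. A toy model checker (my own sketch; encoding a sentence as a quantifier string plus a matrix is a simplification for prenex sentences):

```python
def truth(domain, quantifiers, matrix):
    """Decide a prenex first-order sentence over a finite domain.
    quantifiers: a string over {'A', 'E'} ('A' = forall, 'E' = exists).
    matrix: a boolean function taking one argument per quantifier.
    Exhaustive search terminates precisely because the domain is finite;
    over N the same recursion would never return."""
    def go(prefix, env):
        if not prefix:
            return matrix(*env)
        branches = (go(prefix[1:], env + (d,)) for d in domain)
        return all(branches) if prefix[0] == 'A' else any(branches)
    return go(quantifiers, ())
```

For example, `truth(range(3), "AE", lambda x, y: y == (x + 1) % 3)` decides “every element has a successor mod 3” by checking all nine pairs; swap in any finite squirrel-structure and the same loop decides T(squirrel).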
4) You are probably right that Dawkins wasn’t thinking about mathematical structures when he made his argument. But because he does claim that his argument applies to complexity in general, not just to specific instances, he’s stuck (I think) either accepting applications he hadn’t thought about or backing off the generality of his claim. It’s of course hard to know exactly what he meant by complexity, but it’s hard for me to imagine any possible meaning consistent with Dawkins’s usage that doesn’t make arithmetic (literally) infinitely more complex than a squirrel.
5) Thanks for trying to explain to Silas that he doesn’t understand the difference between a structure and an axiomatic system. I’ve tried explaining it to him in many ways, at many times, in many forums, but have failed to make any headway. Maybe you’ll have better luck.
6) If any of this seems wrong to you, I’ll be glad to be set straight.