Landsburg does not doubt biological evolution. His argument is that complexity is inherent in the laws of nature, in reality itself. As for what it has to do with rationality: it’s thought-provoking. And rationality is a means to an end, a way of succeeding at your goals; if your goal is to fathom the nature of reality, these thoughts are valid in that they add to the pile of possibilities worth considering.
I’m not sure “thought-provoking” is actually a good thing any more than “reflectively coherent” is a good thing. “Thought-provoking” is just a promise of future benefit from understanding; that promise is often broken.
Then what are Dawkins and his opponents “equally wrong” about? What does it mean to say that complexity is “inherent in the laws of nature”? Or that it isn’t? What does Landsburg mean by “complexity”? Is arithmetic “complex” because it contains deep truths, or is it “simple” because it can be captured in a small set of axioms?
I have yet to understand what is being claimed here.
Arithmetic is complex because it cannot be captured in a small set of axioms. More precisely, it cannot be specified by any (small or large) set of axioms, because any set of (true) axioms about arithmetic applies equally well to other structures that are not arithmetic. Your favorite set of axioms fails to specify arithmetic in the same way that the statement “bricks are rectangular” fails to specify bricks; there are lots of other things that are also rectangular.
This is not true, for example, of euclidean geometry, which can be specified by a set of axioms.
Silas Barta’s remarks notwithstanding, the question of which truths we can know has nothing to do with this; we can never know all the truths of euclidean geometry, but we can still specify euclidean geometry via a set of axioms. Not so for arithmetic.
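For the record, the reason no axiom set works is the standard compactness argument (textbook logic, nothing original here): take any set T of true axioms about arithmetic, add a new constant symbol c, and consider

$$T' \;=\; T \,\cup\, \{\, c \neq 0,\; c \neq S0,\; c \neq SS0,\; \dots \,\}.$$

Every finite subset of T′ has a model (interpret c as a large enough numeral), so by compactness T′ has a model too, and that model satisfies every axiom in T while containing an element that differs from every numeral. So T, whatever it is, fails to single out arithmetic.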
Arithmetic is complex because it cannot be captured in a small set of axioms.
Then the universe doesn’t use that arithmetic in implementing physics, and it doesn’t have the significance you claim it does. Like I said just above, it uses the kind of arithmetic that can be captured in a small set of axioms. And like I said in our many exchanges, it’s true that modern computers can’t answer every question about the natural numbers, but they don’t need to. Neither does the universe.
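Here’s a minimal sketch of the kind of thing I mean (toy Python of my own, not anything from Landsburg’s book): the whole apparatus of addition and multiplication falls out of a successor operation plus two recursion rules apiece, which is a “small set of axioms” by any standard.

```python
# Toy sketch: + and * on the naturals, defined from successor alone.
def succ(n):
    return n + 1  # stand-in for the primitive successor operation

def add(a, b):
    # a + 0 = a ;  a + S(b) = S(a + b)
    return a if b == 0 else succ(add(a, b - 1))

def mul(a, b):
    # a * 0 = 0 ;  a * S(b) = (a * b) + a
    return 0 if b == 0 else add(mul(a, b - 1), a)

assert add(2, 3) == 5 and mul(4, 6) == 24
```

That little specification is all a computer (or, plausibly, a universe) needs in order to run arithmetic.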
Your favorite set of axioms fails to specify arithmetic in the same way that the statement “bricks are rectangular” fails to specify bricks; there are lots of other things that are also rectangular.
Yes, but you only need finite space to specify bricks well enough to get the desired functionality of bricks. Your argument would imply that bricks are infinitely complex because we don’t have a finite procedure for determining whether an arbitrary object “really” is a brick, because of e.g. all the borderline cases. (“Do the stones in a stone wall count as bricks?”)
Then the universe doesn’t use that arithmetic in implementing physics,
How do you know?
Like I said just above, it uses the kind of arithmetic that can be captured in a small set of axioms.
What kind of arithmetic is that? It would have to be a kind of arithmetic to which Gödel’s and Tarski’s theorems don’t apply, so it must be very different indeed from any arithmetic I’ve ever heard of.
Then the universe doesn’t use that arithmetic in implementing physics,
How do you know?
Mainly from the computability of the laws of physics.
Like I said just above, it uses the kind of arithmetic that can be captured in a small set of axioms.
What kind of arithmetic is that? It would have to be a kind of arithmetic to which Gödel’s and Tarski’s theorems don’t apply, so it must be very different indeed from any arithmetic I’ve ever heard of.
Right—meaning the universe doesn’t use arithmetic (as you’ve defined it). You’re getting tripped up on the symbol “arithmetic”, for which you keep shifting meanings. Just focus on the substance of what you mean by arithmetic: Does the universe need that to work? No, it does not. Do computers need to completely specify that arithmetic to work? No, they do not.
By the way:
1) To quote someone here, use the greater-than symbol before the quoted paragraph, as described in the help link below the entry field for a comment.
2) One should be cautious about modding down someone one is in a direct argument with, as that tends to compromise one’s judgment. I have not voted you down, though if I were a bystander to this, I would.
First—I have never shifted meanings on the definition of arithmetic. Arithmetic means the standard model of the natural numbers. I believe I’ve been quite consistent about this.
Second—as I’ve said many times, I believe that the most plausible candidates for the “fabric of the Universe” are mathematical structures like arithmetic. And as I’ve said many times, obviously I can’t prove this. The best I can do is explain why I find it so plausible, which I’ve tried to do in my book. If those arguments don’t move you, well, so be it. I’ve never claimed they were definitive.
Third—you seem to think (unless I’ve misread you) that this vision of the Universe is crucial to my point about Dawkins. It’s not.
Fourth—Here is my point about Dawkins; it would be helpful to know which part(s) you consider the locus of our disagreement:
a) the natural numbers—whether or not you buy my vision of them as the basis of reality—are highly complex by any reasonable definition (I am talking here about the actual standard model of the natural numbers, not some axiomatic system that partly describes them);
b) Dawkins has said, repeatedly, that all complexity—not just physical complexity, not just biological complexity, but all complexity—must evolve from something simpler. And indeed, his argument needs this statement in all its generality, because his argument makes no special assumption that would restrict us to physics or biology. It’s an argument about the nature of complexity itself.
c) Therefore, if we buy Dawkins’s argument, we must conclude that the natural numbers evolved from something simpler.
d) The natural numbers did not evolve from something simpler. Therefore Dawkins’s argument can’t be right.
It seems to me that the definition of complexity is the root of any disagreement here. It seems obvious to me that the natural numbers are not complex in the sense that a human being is complex. I don’t understand what kind of complexity you could be talking about that places natural numbers on an equivalent footing with, say, the entire ecosystem of the planet Earth.
Contrary to what SteveLandsburg says in his reply, I think you are exactly right. And this is how our disagreement originally started, by me explaining why he’s wrong about complexity.
Scientists use math to compress our description of the universe. It wouldn’t make much sense to use something infinitely complex for data compression!
So, to the extent he’s talking about math or arithmetic in a way that does have such complexity, he’s talking about something that isn’t particularly relevant to our universe.
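To make the compression point concrete (a toy of my own; the numbers are invented for the example):

```python
# A one-line mathematical law compresses an arbitrarily long table of data.
# (Fabricated observations: distance fallen after t seconds.)
observations = {0: 0.0, 1: 4.9, 2: 19.6, 3: 44.1, 4: 78.4}

def law(t):
    return 0.5 * 9.8 * t ** 2  # d = g * t^2 / 2, with g = 9.8

print(all(abs(law(t) - d) < 1e-9 for t, d in observations.items()))  # True
```

The law is shorter than the table, and it stays the same size no matter how many rows you add. That’s the service math performs for science.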
I think the system of natural numbers is pretty damn complex. But the system of natural numbers is an abstract object, and Dawkins likely either never meant for his argument to apply to abstract objects, or thinks all abstract objects are constructed by intelligences, or denies the existence of abstract objects.
I think there is a good chance all abstract objects are constructed, and a better chance that the system of natural numbers was constructed (or at least that the system, when construed as an object and not a structural analog, is constructed and not discovered; that is, numbers are more like adjectives than nouns, and adjectives aren’t objects).
mattnewport: This would seem to put you in the opposite corner from Silas, who thinks (if I read him correctly) that all of physical reality is computably describable, and hence far simpler than arithmetic (in the sense of being describable using only a small and relatively simple fragment of arithmetic).
Be that as it may, I’ve blogged quite a bit about the nature of the complexity of arithmetic (see an old post called “Non-Simple Arithmetic” on my blog). In brief: a) no set of axioms suffices to specify the standard model of arithmetic (i.e. to distinguish it from other models). And b) we have the subjective reports of mathematicians about the complexity of their subject matter, which I think should be given at least as much weight as the subjective reports of ecologists. (There are a c), d) and e) as well, but in this short comment, I’ll rest my case here.)
Your biggest problem here, and in your blog posts, is that you equivocate between the structure of the standard natural numbers (N) and the theory of that structure (T(N), also known as True Arithmetic). The former is recursive and (a reasonable encoding of) it has pretty low Kolmogorov complexity. The latter is wildly nonrecursive and has infinite K-complexity. (See almost any of Chaitin’s work on algorithmic information theory, especially the Omega papers, for definitions of the K-complexity of a formal system.)
The difference between these two structures comes from the process of translating between them. Once explained properly, it’s almost intuitive to a recursion theorist, or a computer scientist versed in logic, that there’s a computable reduction from any language in the Arithmetic Hierarchy to the language of true statements of True Arithmetic. This implies that going from a description of N to a truth-enumerator or decision procedure for T(N) requires a hypercomputer with an infinite tower of halting, meta-halting, … meta^n-halting … oracles.
However, it so happens that simulating the physical world (or rather, our best physical ‘theories’, which in a mathematical sense are structures, not theories) on a Turing machine does not actually require T(N), only N. We only use theories, as opposed to models, of arithmetic, when we go to actually reason from our description of physics to consequences. And any such reasoning we actually do, just like any pure mathematical reasoning we do, depends only on a finite-complexity fragment of T(N).
Now, how does this make biology more complex than arithmetic? Well, to simulate any biological creature, you need N plus a bunch of biological information, which together has more K-complexity than just N. To REASON about the biological creature, at any particular level of enlightenment, requires some finite fragment of T(N), plus that extra biological information. To enumerate all true statements about the creature (including deeply-alternating quantified statements about its counterfactual behaviour in every possible circumstance), you require the infinite information in T(N), plus, again, that extra biological information. (In the last case it’s of course rather problematic to say there’s more complexity there, but there’s certainly at least as much.)
Note that I didn’t know all this before this morning, when I read your blog argument with Silas and Snorri; I thank all three of you for a discussion that greatly clarified my grasp on the levels of abstraction in play here.
(This morning I would have argued strongly against your Platonism as well; tonight I’m not so sure...)
Splat: Thanks for this; it’s enlightening and useful.
The part I’m not convinced of this:
to simulate any biological creature, you need N plus a bunch of biological information
A squirrel is a finite structure; it can be specified by a sequence of A’s, C’s, G’s and T’s, plus some rules for protein synthesis and a finite number of other facts about chemistry. (Or if you think that leaves something out, it can be described by the interactions among a large but finite collection of atoms.) So I don’t see where we need all of N to simulate a squirrel.
Well, if you need to simulate a squirrel for just a little while, and not for unbounded lengths of time, a substructure of N (without closure under the operations) or a structure with a considerable amount of sharing with N (like 64-bit integers on a computer) could suffice for your simulation.
The problem you encounter here is that these substructures and near-substructures, once they reach a certain size, actually require more information to specify than N itself. (How large this size is depends on which abstract computer you used to define your instance of K-complexity, but the asymptotic trend is unavoidable.)
If this seems paradoxical, consider that after a while the shortest computer program for generating an open initial segment of N is a computer program for generating all of N plus instructions indicating when to stop.
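A toy rendering of that point (my own illustration, in Python rather than in terms of a reference machine):

```python
# Enumerating all of N versus enumerating an initial segment of N.
def enumerate_N():
    n = 0
    while True:  # generates every natural number
        yield n
        n += 1

def enumerate_segment(bound):
    for n in enumerate_N():
        if n >= bound:  # the extra "when to stop" information
            return
        yield n

print(list(enumerate_segment(10)))  # [0, 1, ..., 9]
```

The segment program is the N program plus a bound, so once the bound is large and incompressible, the segment costs strictly more bits to specify than N itself.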
Either way, it so happens that the biological information you’d need to simulate the squirrel dwarfs N in complexity, so even if you can find a sufficient substitute for N that’s “lightweight” you can’t possibly save enough to make your squirrel simulation less complex than N.
The problem you encounter here is that these substructures and near-substructures, once they reach a certain size, actually require more information to specify than N itself.
This depends on what you mean by “specify”. To distinguish N from other mathematical structures requires either an infinite (indeed non-recursive) amount of information or a second-order specification including some phrase like “all predicates”. Are you referring to the latter? Or to something else I don’t know about?
2) I do not know Chaitin’s definition of the K-complexity of a structure. I’ll try tracking it down, though if it’s easy for you to post a quick definition, I’ll be grateful. (I do think I know how to define the K-complexity of a theory.) I presume that if I knew this, I’d know your answer to question 1).
3) Whatever the definition, the question remains whether K-complexity is the right concept here. Dawkins’s argument does not define complexity; he treats it as “we know it when we see it”. My assertion has been that Dawkins’s argument applies in a context where it leads to an incorrect conclusion, and therefore can’t be right. To make this argument, I need to use Dawkins’s intended notion of complexity, which might not be the same as Chaitin’s or Kolmogorov’s. And for this, the best I can do is to infer from context what Dawkins does and does not see as complex. (It is clear from context that he sees complexity as a general phenomenon, not just a biological one.)
4) The natural numbers are certainly an extremely complex structure in the everyday sense of the word; after thousands of years of study, people are learning new and surprising things about them every day, and there is no expectation that we’ve even scratched the surface. This is, of course, a manifestation of the “wildly nonrecursive” nature of T(N), all of which is reflected in N itself. And this, again, seems pretty close to the way Dawkins uses the word.
5) I continue to be most grateful for your input. I see that Silas is back to insisting that you can’t simulate a squirrel with a simple list of axioms, after having been told forty-eight bajillion times (here and elsewhere) that nobody’s asserting any such thing; my claim is that you can simulate a squirrel in the structure N, not in any particular axiomatic system. Whether or not you agree, it’s a pleasure to engage with someone who’s not obsessed with pummelling straw men.
2) A quick search of Google Scholar didn’t net me a Chaitin definition of K-complexity for a structure. This doesn’t surprise me much, as his uses of AIT in logic are much more oriented toward proof theory than model theory. Over here you can see some of the basic definitions. If you read pages 7-10 and then my explanation to Silas here, you can figure out what the K-complexity of a structure means. There’s also a definition of the algorithmic complexity of a theory in section 3 of the Chaitin paper.
According to these definitions, the complexity of N is about a few hundred bits for reasonable choices of machine, and the complexity of T(N) is ∞.
1) It actually is pretty hard to characterize N extrinsically/intensionally; to characterize it with first-order statements takes infinite information (as above). The second-order characterization, by contrast, is a little hard to interpret. It takes a finite amount of information to pin down the model, but the second-order theory PA2 still has infinite K-complexity because of its lack of complete rules of inference.
Intrinsic/extensional characterizations, on the other hand, are simple to do, as referenced above. Really, Gödel Incompleteness wouldn’t be all that shocking in the first place if we couldn’t specify N any other way than its first-order theory! Interesting, yes, shocking, no. The real scandal of incompleteness is that you can so simply come up with a procedure for listing all the ground (quantifier-free) truths of arithmetic and yet passing either to or from the kind of generalizations that mathematicians would like to make is fraught with literally infinite peril.
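To see how simple that listing procedure is, here is a toy decision procedure for ground truths (my own sketch; the term encoding is invented for illustration):

```python
# Ground (variable-free) arithmetic sentences are decidable by evaluation.
# Terms are nested tuples, e.g. ("plus", ("S", ("0",)), ("0",)) for S(0) + 0.
def value(term):
    op = term[0]
    if op == "0":
        return 0
    if op == "S":
        return value(term[1]) + 1
    if op == "plus":
        return value(term[1]) + value(term[2])
    if op == "times":
        return value(term[1]) * value(term[2])
    raise ValueError(op)

def decide_equation(lhs, rhs):
    return value(lhs) == value(rhs)  # brute evaluation settles every ground equation

two = ("S", ("S", ("0",)))
print(decide_equation(("plus", two, two), ("times", two, two)))  # 2+2 = 2*2: True
```

The trouble, as above, starts only when quantifiers enter.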
3&4) Actually I don’t think that Dawkins is talking about K-complexity, exactly. If that’s all you’re talking about, after all, an equal-weight puddle of boiling water has more K-complexity than a squirrel does. I think there’s a more involved, composite notion at work that builds on K-complexity and which has so far resisted full formalization. Something like this, I’d venture.
The complexity of the natural numbers as a subject of mathematical study, while certainly well-attested, seems to be of a different sense than either K-complexity or the above. Further, it’s unclear whether we should really be placing the onus of this complexity on N, on the semantics of quantification in infinite models (which N just happens to bring out), or on the properties of computation in general. In the latter case, some would say the root of the complexity lies in physics.
Also, I very much doubt that he had in mind mathematical structures as things that “exist”. Whether it turns out that the difference in the way we experience abstractions like the natural numbers and concrete physical objects like squirrels is fundamental, as many would have it, or merely a matter of our perspective from within our singular mathematical context, as you among others suspect, it’s clear that there is some perceptible difference involved. It doesn’t seem entirely fair to press the point this much without acknowledging the unresolved difference in ontology as the main point of conflict.
Trying to quantify which thing is more complex is really kind of a sideshow, although an interesting one. If one forces both senses of complexity into the K-complexity box then Dawkins “wins”, at the expense of both of you being turned into straw men. If one goes by what you both really mean, though, I think the complexity is probably incommensurable (no common definition or scale) and the comparison is off-point.
5) Thank you. I hope the discussion here continues to grow more constructive and helpful for all involved.
Thanks again for bringing insight and sanity to this discussion. A few points:
1) Your description of the structure N presupposes some knowledge of the structure N; the program that prints out the structure needs a first statement, a second statement, etc. This is, of course, unavoidable, and it’s therefore not a complaint; I doubt that there’s any way to give a formal description of the natural numbers without presupposing some informal understanding of the natural numbers. But what it does mean, I think, is that K-complexity (in the sense that you’re using it) is surely the wrong measure of complexity here—because when you say that N has low K-complexity, what you’re really saying is that “N is easy to describe provided you already know something about N”. What we really want to know is how much complexity is embedded in that prior knowledge.
1A) On the other hand, I’m not clear on how much of the structure of N is necessarily assumed in any formal description, so my point 1) might be weaker than I’ve made it out to be.
2) It has been my position all along that K-complexity is largely a red herring here in the sense that it need not capture Dawkins’s meaning. Your observation that a pot of boiling water is more K-complex than a squirrel speaks directly to this point, and I will probably steal it for use in future discussions.
3) When you talk about T(N), I presume you mean the language of Peano arithmetic, together with the set of all true statements in that language. (Correct me if I’m wrong.) I would hesitate to call this a theory, because it’s not recursively axiomatizable, but that’s a quibble. In any event, we do know what we mean by T(N), but we don’t know what we mean by T(squirrel) until we specify a language for talking about squirrels—a set of constant symbols corresponding to tail, head, etc., or one for each atom, or..., and various relations, etc. So T(N) is well defined, while T(squirrel) is not. But whatever language you settle on, a squirrel is still going to be a finite structure, so T(squirrel) is not going to share the “wild nonrecursiveness” of T(N) (which is closely related to the difficulty of giving an extrinsic characterization). That seems to me to capture a large part of the intuition that the natural numbers are more complex than a squirrel.
4) You are probably right that Dawkins wasn’t thinking about mathematical structures when he made his argument. But because he does claim that his argument applies to complexity in general, not just to specific instances, he’s stuck (I think) either accepting applications he hadn’t thought about or backing off the generality of his claim. It’s of course hard to know exactly what he meant by complexity, but it’s hard for me to imagine any possible meaning consistent with Dawkins’s usage that doesn’t make arithmetic (literally) infinitely more complex than a squirrel.
5) Thanks for trying to explain to Silas that he doesn’t understand the difference between a structure and an axiomatic system. I’ve tried explaining it to him in many ways, at many times, in many forums, but have failed to make any headway. Maybe you’ll have better luck.
6) If any of this seems wrong to you, I’ll be glad to be set straight.
Whatever the definition, the question remains whether K-complexity is the right concept here. Dawkins’s argument does not define complexity; he treats it as “we know it when we see it”. My assertion has been that Dawkins’s argument applies in a context where it leads to an incorrect conclusion, and therefore can’t be right. To make this argument, I need to use Dawkins’s intended notion of complexity, which might not be the same as Chaitin’s or Kolmogorov’s. And for this, the best I can do is to infer from context what Dawkins does and does not see as complex.
1) Unless they say otherwise, you should assume someone is using the standard meanings for the terms they use, which would mean Dawkins is using the intuitive definition, which closely parallels K-complexity.
2) If you’re going to write a book hundreds of pages long in which you crucially rely on the concept of complexity, you need to define it explicitly. That’s just how it works. If you know what concept of complexity is “the” right one here, you need to spell it out yourself.
3) Most importantly, you have shown Dawkins’s argument to be in error in the context of an immaterial realm that is not observable and does not interact with this universe. Surely, you can think of some reason why Dawkins doesn’t intend to refer to such realms, can’t you? (Hint: Dawkins is an atheist, materialist, and naturalist—just like you, in other words, until it comes to the issue of math.)
ETA: If any followers of this exchange think I’m somehow not getting something, or being unfair to SteveLandsburg, please let me know, either as a reply in the thread or a PM, whether or not you use your normal handle.
If you’re going to write a book hundreds of pages long in which you crucially rely on the concept of complexity, you need to define it explicitly. That’s just how it works. If you know what concept of complexity is “the” right one here, you need to spell it out yourself.
Well, Silas, what I actually did was write a book 255 pages long of which this whole Dawkins/complexity thing occupies about five pages (29-34) and where complexity is touched on exactly once more, in a brief passage on pages 7-8. From the discrepancy between your description and reality, I infer that you haven’t read the book, which would help to explain why your comments are so bizarrely misdirected.
Oh, and I see that you’re still going on about axiomatic descriptions of squirrels, as if that were relevant to something I’d said. (Hint: A simulation is not an axiomatic system. That’s 48 bajillion and one.)
Well, Silas, what I actually did was write a book 255 pages long of which this whole Dawkins/complexity thing occupies about five pages (29-34) and where complexity is touched on exactly once more, in a brief passage on pages 7-8. From the discrepancy between your description and reality, I infer that you haven’t read the book, which would help to explain why your comments are so bizarrely misdirected.
I have not read the entire book. I have read many long portions of it, mostly the philosophical ones and those dealing with physics. I was drawn to those on the assumption that surely you would have defined complexity in your exposition!
It’s misleading to say that your usage of complexity only takes 8 pages, so it’s insignificant. Rather, the point you make about complexity is your grounding for broader claims about the role mathematics plays in the universe, which you come back to frequently. The explicit mention of the term “complexity” is thus a poor measure of how much you rely on it.
But even if it were just 8 pages, you should still have defined it, and you should still not expect to have achieved insights on the topic, given that you haven’t defined it.
(I certainly wouldn’t want to buy it—why should I subsidize such confused thinking? I don’t even like your defenses of libertarianism, despite being libertarian.)
Oh, and I see that you’re still going on about axiomatic descriptions of squirrels, as if that were relevant to something I’d said. (Hint: A simulation is not an axiomatic system. That’s 48 bajillion and one.)
Ah, another suddenly-crucial distinction to make, so you can wiggle out of being wrong!
I should probably use this opportunity to both show I did read many portions, and show why Landsburg doesn’t get what it means to really explain something. His explanation of the Heisenberg Uncertainty Principle (which gets widely praised as a good explanation for some reason) is this: think of an electron as moving in a circle within a square. If you measure its vertical position, its closeness to the top determines the chance of getting a “top” or “bottom” reading.
Likewise the horizontal direction: if you measure the horizontal position of the electron, your chances of getting a “left” or “right” reading depends on how far it is from that side.
And for the important part: why can’t you measure both at the same time? Landsburg’s brilliant explanation: um, because you can’t.
But that’s what the explanation was supposed to demystify in the first place! You can’t demystify something by feeding the very mystery back in as a black-box fact unto itself. To explain it, you would need to explain enough of the dynamics of quantum systems so that, at the end, your reader doesn’t view precise measurement of both position and momentum as even being coherent! Saying, “oh, you can’t because you can’t” isn’t an explanation.
I see that Silas is back to insisting that you can’t simulate a squirrel with a simple list of axioms, after having been told forty-eight bajillion times (here and elsewhere) that nobody’s asserting any such thing;
I didn’t say that. Read it again. I said that there is some finite axiom list that can describe squirrels, but it’s not just the axioms that suffice to let you use arithmetic. It’s those, plus biological information about squirrels. But this arithmetic is not the infinitely complex arithmetic you talk about in other contexts!
my claim is that you can simulate a squirrel in the structure N, not in any particular axiomatic system.
You can’t—you need axioms beyond those that specify N. The fact that the biological model involving those axioms uses math, doesn’t mean you’ve described it once you’ve described the structure N. So whether or not you call that “simulating it in the structure N”, it’s certainly more complex than just N.
I’m responding here to your invitation in the parent, since this post provides some good examples of what you’re not getting.
I didn’t say that. Read it again. I said that there is some finite axiom list that can describe squirrels, but it’s not just the axioms that suffice to let you use arithmetic.
Simulating squirrels and using arithmetic require information, but that information is not supplied in the form of axioms. The best way to imagine this in the case of arithmetic is in terms of a structure.
Starting from the definition in that wikipedia page, you can imagine giving the graphs of the universe and functions and relations as Datalog terms. (Using terms instead of tuples keeps the graphs disjoint, which will be important later.) Like so:
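(A fragment, with predicate names of my own choosing:)

```
% illustrative fragment of the infinite atomic diagram of N
number(0).       number(1).       number(2).       ...
succ(0, 1).      succ(1, 2).      succ(2, 3).      ...
plus(0, 0, 0).   plus(0, 1, 1).   plus(1, 1, 2).   ...
times(0, 0, 0).  times(1, 2, 2).  times(2, 2, 4).  ...
```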
Then you use some simple recursive coding of Datalog terms as binary. What you’re left with is just a big (infinite) set of binary strings. The Kolmogorov complexity of the structure N, then (the thing you need to use arithmetic), is the size of the shortest program that enumerates the set, which is actually very small.
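If you want to see how small, here is a sketch of such an enumerator (toy Python standing in for the binary coding):

```python
# Enumerates every successor, addition, and multiplication fact of N,
# dovetailing by k = a + b so each fact appears after finitely many steps.
def enumerate_structure():
    k = 0
    while True:
        yield ("succ", k, k + 1)
        for a in range(k + 1):  # all pairs (a, b) with a + b == k
            b = k - a
            yield ("plus", a, b, a + b)
            yield ("times", a, b, a * b)
        k += 1

gen = enumerate_structure()
for _ in range(8):
    print(next(gen))
```

A dozen lines, i.e. a few hundred bits: that is the K-complexity scale of the structure N.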
Note that this is actually the same arithmetic that Steve is talking about! It is just a different level of description, one that is much simpler but entirely sufficient to conduct simulations with. It is only in understanding the long-term behavior of simulations without running them that one requires any of the extra complexity embodied in T(N) (the theory). To actually run them you just need N (the structure).
The fact that you don’t seem to understand this point yet leads me to believe you were being a little unfair when you said:
By the way, I really hope your remark about Splat’s comment being “enlightening” was just politeness, and that you didn’t actually mean it. Because if you did, that would mean you’re just now learning the distinction between N and T(N), the equivocation between which undermines your claims about arithmetic’s relation to the universe.
Now, if you want to complete the comparison, imagine you’re creating a structure that includes a universe with squirrel-states and times, and a function from time to squirrel state. This would look something like:
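(Again with invented placeholder names:)

```
% illustrative fragment; real squirrel-states would be enormous terms
time(0).   time(1).   time(2).   ...
state(foraging).   state(grooming).   state(sleeping).   ...
state_at(0, foraging).   state_at(1, grooming).   state_at(2, sleeping).   ...
```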
The squirrel states, though, will not be described by a couple of words like that, but by incredibly detailed descriptions of the squirrel’s internal state—what shape all its cells are, where all the mRNAs are on their way to the ribosomes, etc. The structure you come up with will take a much bigger program to enumerate than N will. (And I know you already agree with the conclusion here, but making the correct parallel matters.)
Simulating squirrels and using arithmetic require information, but that information is not supplied in the form of axioms.
I wasn’t careful to distinguish axioms from other kinds of information in the model, and I think it’s a distraction to do so because it’s just an issue of labels (which as you probably saw from the discussion is a major source of confusion). My focus was on tabulating the total complexity of whatever-is-being-claimed-is-significant. For that, you only need to count up how much information goes into your “message” describing the data (in the “Minimum Message Length criterion” sense of “message”). Anything in such a message can be described without loss of generality as an axiom.
If I want to describe squirrels, I will find, like most scientists find, that the job is much easier if I can express things using arithmetic. Arithmetic is so helpful that, even after accounting for the cost of telling you how to use it (the axioms-or-whatever of math), I still save in total message length. Whether you call the squirrel info I gathered from nature, or the specification of math, the “axioms” doesn’t matter.
...which is actually very small. Note that this is actually the same arithmetic that Steve is talking about! It is just a different level of description, one that is much simpler but entirely sufficient to conduct simulations with. It is only in understanding the long-term behavior of simulations without running them that one requires any of the extra complexity embodied in T(N) (the theory). …
But it’s not the same arithmetic SteveLandsburg is talking about, if you follow through to the implications he claims fall out from it. He claims arithmetic—the infinitely complex one—runs the universe. It doesn’t. The universe only requires the short message specifying N, plus the (finite) particulars of the universe. Whatever infinitely-complex thing he’s talking about from a “different level of description” isn’t the same thing, and can’t be the same thing.
What’s more, the universe can’t contain that thing because there is no (computable) isomorphism between it and the universe. As we derive the results of longer and longer chains of reasoning, our universe starts to contain more and more complex pieces of that thing, but it still wouldn’t be somehow fundamental to the universe’s operation—not if we’re just now getting to contain pieces of it.
The fact that you don’t seem to understand this point yet leads me to believe you were being a little unfair when you said: … Now, if you want to complete the comparison, imagine you’re creating a structure that includes a universe with squirrel-states and times, and a function from time to squirrel state. This would look something like: … The squirrel states, though, will not be described by a couple of words like that,
I’m sorry, I don’t see how that contradicts what I said or shows a different parallel. Now, I certainly didn’t use the N vs. T(N) terminology you did, but I clearly explained how there have to be two separate “arithmetics” in play here, as best summarized in my comment here. Whatever infinitely complex arithmetic SteveLandsburg is talking about, isn’t the one that runs the universe. The insights on one don’t apply to the other.
Okay, pretend I’ve given you the axioms sufficient for you to +-*/. Can you simulate squirrels now? Of course not. You still have to go out and collect information about squirrels and add it to your description of the axioms of arithmetic (which suffice for all of N) to have a description of squirrels.
You claim that because you can simulate squirrels with (a part of) N, then N suffices to simulate squirrels. But this is like saying that, because you know the encoding method your friend uses to send you messages, you must know the content of all future messages.
That’s wrong, because those are different parts of the compressed data: one part tells you how to decompress, another tells you what you’re decompressing. Knowing how to decompress (i.e., the axioms of N) is different from knowing the string to be decompressed by that method (i.e. the arithmetic symbols encoding squirrels).
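A toy version of the analogy (message contents made up):

```python
import zlib

# Knowing the decompression method (cf. the axioms specifying N) tells you
# nothing about the content of any particular message (cf. squirrel data).
encoded = zlib.compress(b"squirrel measurements collected from nature")
print(zlib.decompress(encoded))  # the content lives in the string, not in zlib
```

zlib is part of every message’s description in some sense, but the squirrel bits still have to come from somewhere.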
By the way, I really hope your remark about Splat’s comment being “enlightening” was just politeness, and that you didn’t actually mean it. Because if you did, that would mean you’re just now learning the distinction between N and T(N), the equivocation between which undermines your claims about arithmetic’s relation to the universe.
And much of his comment was a restatement of my point about the difference between the complex arithmetic you refer to, and the arithmetic the universe actually runs on. (I’m not holding my breath for a retraction or a mea culpa or anything, just letting people know what they’re up against here.)
b) we have the subjective reports of mathematicians about the complexity of their subject matter, which I think should be given at least as much weight as the subjective reports of ecologists
Again, this word complexity is used in many ways. Complexity in the sense of “humans find this complicated” is a different concept from complexity in the sense of Kolmogorov complexity.
Don’t worry guys, I didn’t let you down. I addressed the issue from the perspective of Kolmogorov complexity in my first blog response. Landsburg initially replied with (I’m paraphrasing), “so what if you became an expert on information theory? That’s not the only meaning of complexity.”
Only later did he try to claim that he also meets the Kolmogorov definition.
(And FWIW, I’m not an expert on information theory—it’s just a hobby. I guess my knowledge just looked impressive to someone...)
no set of axioms suffices to specify the standard model of arithmetic (i.e. to distinguish it from other models).
Then what do you mean when you say “integers”^H^H “natural numbers”, if no set of premises suffices to talk about it as opposed to something else?
Anyway, no countable set of first-order axioms works. But a finite set of second-order axioms works. So to talk about the natural numbers, it suffices merely to think that when you say “Any predicate that is true of zero, and is true of the successor of every number it is true of, is true of all natural numbers” you made sense when you said “any predicate”.
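Written out, that axiom is the single second-order statement

$$\forall P\,\Bigl(\,P(0) \;\wedge\; \forall n\,\bigl(P(n) \rightarrow P(S(n))\bigr) \;\rightarrow\; \forall n\,P(n)\,\Bigr)$$

which, together with the finitely many axioms governing S, +, and ×, pins down the natural numbers up to isomorphism.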
It is this sort of minor-seeming yet important technical inaccuracy that separates “The Big Questions” from “Good and Real”, I’m afraid.
Anyway, no countable set of first-order axioms works. But a finite set of second-order axioms works. So to talk about the integers, it suffices merely to think that when you say “Any predicate that is true of zero, and is true of the successor of every number it is true of, is true of all integers” you made sense when you said “any predicate”.
I think that you have to be careful about claims that second-order logic fixes a unique model. Granted, you can derive the statement “There exists a unique model of the axioms of arithmetic.”
But, for example, what in reality does your “any predicate” quantifier range over? If, under interpretation, it ranges over subsets of the domain of discourse, well, what exactly constitutes a subset? This presumes that you have a model of some set theory in hand. How do you specify which model of set theory you’re using? So far as I know, there’s no way out of this regress.
[ETA: I’m not a logician. I’m definitely open to correction here.]
[ETA2: And now that I read more carefully, you were acknowledging this point when you wrote, “it suffices merely to think that . . . you made sense when you said ‘any predicate’.”
However, you didn’t acknowledge this issue in your earlier comment. I think that it’s too significant an issue to be dismissed with an “it suffices merely...”. When an infinite regress threatens, it doesn’t suffice to push the issue back a level and say “it suffices merely to show that that’s the last level.”]
Sure, and that’s the age-old argument for why we should not take second-order logic at face value. But in that case we cannot go around blithely talking about the integers, for there is no language we could use to speak of them, or of any other infinite set. We would be forbidden from saying that there is something we cannot talk about, and this is not surprising—what is it you can’t refer to?
Sure, and that’s the age-old argument for why we should not take second-order logic at face value.
I’m not familiar with the literature of this argument. (It was probably clear from the tentativeness of my comment that I was thinking my own murky way through this issue.)
You seem to take it as the default that we should take second-order logic at face value. (Now that I know what you mean by “face value”, I see that you did acknowledge this issue in your earlier comment.) But I should think that the default would be to be skeptical about this. Why expect that we have a canonical model when we talk about sets or predicates if we’re entertaining skepticism that we have a canonical model for integer-talk?
Why expect that we have a canonical model when we talk about sets or predicates if we’re entertaining skepticism that we have a canonical model for integer-talk?
We don’t. Skepticism of sets, predicates, and canonical integers are all the same position in the debate.
And so is skepticism of canonical Turing machines, as far as I can tell. Specifically, skepticism that there is always a fact of the matter as to whether a given TM halts.
I think you might be able to make the skeptical position precise by constructing nonstandard variants of TMs where the time steps and tape squares are numbered with nonstandard naturals, and the number of symbols and states are also nonstandard, and you would be able to relate these back to the nonstandard models that produced them by using a recursive description of N to regenerate the nonstandard model of the natural numbers you started with. This would show that there are nonstandard variants of computability that all believe in different ‘standard’, ‘minimal’ models of arithmetic, and are unaware of the existence of smaller models, and thus presumably of the ‘weaker’ (because they halt less often) notions of Turing Machines.
Now, I’m not yet sure if this construction goes through as I described it; for me, if it does it weighs against the existence of a ‘true’ Standard Model and if it doesn’t it weighs in favor.
It is in fact provably impossible to construct a computable nonstandard model (where, say, S and +, or S and × are both computable relations) in a standard model of computation. What I was referring to was a nonstandard model that was computable according to an equally nonstandard definition of computation, one that makes explicit the definitional dependence of Turing Machines on the standard natural numbers and replaces them with nonstandard ones.
The question I’m wondering about is whether such a definition leads to a sensible theory of computation (at least on its own terms) or whether it turns out to just be nonsense. This may have been addressed in the literature but if so it’s beyond the level to which I’ve read so far.
It is in fact provably impossible to construct a computable nonstandard model (where, say, S and +, or S and × are both computable relations) in a standard model of computation.
Would you give a reference? I found it easy to find assertions such as “the completeness theorem is not constructively provable,” but this statement is a little stronger.
But in that case we cannot go around blithely talking about the integers, for there is no language we could use to speak of them, or of any other infinite set. We would be forbidden from saying that there is something we cannot talk about, and this is not surprising—what is it you can’t refer to?
I believe that this claim is based on a defective notion of what it takes to refer to something successfully. The issue that we’re talking about here is a manifestation of that defect. I’m trying to work out a different conception of reference, but it’s very much a work in progress.
mattnewport: This would seem to put you in the opposite corner from Silas, who thinks (if I read him correctly) that all of physical reality is computably describable
No, it wouldn’t—he’s saying basically the same thing I did. The laws of physics are computable. In describing observations, we use concepts from math. The reason we do so is that it allows simpler descriptions of the universe.
Second—as I’ve said many times, I believe that the most plausible candidates for the “fabric of the Universe” are mathematical structures like arithmetic. And as I’ve said many times, obviously I can’t prove this. The best I can do is explain why I find it so plausible, which I’ve tried to do in my book. If those arguments don’t move you, well, so be it. I’ve never claimed they were definitive.
Right, I’ve explained before why your arguments are in error. We can talk more about that some other time.
Third—you seem to think (unless I’ve misread you) that this vision of the Universe is crucial to my point about Dawkins.
No, I accept that they’re separate errors.
Fourth—Here is my point about Dawkins; it would be helpful to know which part(s) you consider the locus of our disagreement:
Okay:
a) the natural numbers—whether or not you buy my vision of them as the basis of reality—are highly complex by any reasonable definition (I am talking here about the actual standard model of the natural numbers, not some axiomatic system that partly describes them);
If what you describe here is what you mean by both “the natural numbers” and “the actual standard model of the natural numbers”, then I will accept this definition for the purposes of argument; but, using it consistently, it doesn’t have the properties you claim.
b) Dawkins has said, repeatedly, that all complexity—not just physical complexity, not just biological complexity, but all complexity—must evolve from something simpler. And indeed, his argument needs this statement in all its generality, because his argument makes no special assumption that would restrict us to physics or biology. It’s an argument about the nature of complexity itself.
Disagree with this. Dawkins has been referring to existing complexity in the universe and the context of every related statement confirms this. But even accepting it, the rest of your argument still doesn’t follow.
d) The natural numbers did not evolve from something simpler. Therefore Dawkins’s argument can’t be right.
Disagree. Again, let’s keep the same definition throughout. Recall what you said the natural numbers were:
the actual standard model of the natural numbers
The model arose from something simpler (like basic human cognition of counting of objects). The Map Is Not The Territory.
Ah, but now I know what you’re going to say: you meant that the sort of Platonic-space model of those natural numbers, the one that exists independently of whatever’s in our universe, has always been complex.
So, if you assume (like theists) that there’s some sort of really-existing realm, outside of the universe, that always has been, and is complex, then you can prove that … there’s a complexity that has always existed. Which is circular.
Silas: I agree that if arithmetic is a human invention, then my counterexample goes away.
If I’ve read you correctly, you believe that arithmetic is a human invention, and therefore reject the counterexample.
On that reading, a key locus of our disagreement is whether arithmetic is a human invention. I think the answer is clearly no, for reasons I’ve written about so extensively that I’d rather not rehash them here.
I’m not sure, though, that I’ve read you correctly, because you occasionally say things like “The Map Is Not The Territory” which seems to presuppose some sort of platonic Territory. But maybe I just don’t understand what you meant by this phrase.
[Incidentally, it occurs to me that perhaps you are misreading my use of the word “model”. I am using this word in the technical sense that it’s used by logicians, not in any of its everyday senses.]
Less confusing than saying “belief and reality”, “map and territory” reminds us that a map of Texas is not the same thing as Texas itself. Saying “map” also dispenses with possible meanings of “belief” apart from “representations of some part of reality”.
Since our predictions don’t always come true, we need different words to describe the thingy that generates our predictions and the thingy that generates our experimental results. The first thingy is called “belief”, the second thingy “reality”.
I agree that if arithmetic is a human invention, then my counterexample goes away.
Then you agree that your “counterexample” amounts to an assumption. If a Platonic realm exists (in some appropriate sense), and if Dawkins was haphazardly including that sense in the universe he is talking about when he describes complexity arising, then he is wrong that complexity always comes from simplicity.
If you assume Dawkins is wrong, he’s wrong. Was that supposed to be insightful?
On that reading, a key locus of our disagreement is whether arithmetic is a human invention. I think the answer is clearly no, for reasons I’ve written about so extensively that I’d rather not rehash them here.
It’s a false dispute, though. When you clarify the substance of what these terms mean, there are meanings for which we agree, and meanings for which we don’t. The only error is to refuse to “cash out” the meaning of “arithmetic” into well-defined predictions, but instead keep it boxed up into one ambiguous term, which you do here, and which you did for complexity. (And it’s kind of strange to speak for hundreds of pages about complexity, and then claim insights on it, without stating your definition anywhere.)
One way we’d agree, for example, is if we take your statements about the Platonic realm to be counterfactual claims about phenomena isomorphic to certain mathematical formalisms (as I said at the beginning of the thread).
[Incidentally, it occurs to me that perhaps you are misreading my use of the word “model”. I am using this word in the technical sense that it’s used by logicians, not in any of its everyday senses.]
The definitions aren’t incredibly different, which is why we have the same term for both of them. If you spell out that definition more explicitly, the same problems arise, or different ones will pop up.
(By the way, this doesn’t surprise me. This is the fourth time you’ve had to define a term within a definition you gave in order to avoid being wrong. It doesn’t mean you changed that “subdefinition”. But genuine insights about the world don’t look this contorted, where you have to keep saying, “No, I really meant this when I was saying what I meant by that.”)
The only error is to refuse to “cash out” the meaning of “arithmetic” into well-defined predictions, but instead keep it boxed up into one ambiguous term,
Silas: This is really quite frustrating. I keep telling you exactly what I mean by arithmetic (the standard model of the natural numbers); I keep using the word to mean this and only this, and you keep claiming that my use of the word is either ambiguous or inconsistent. It makes it hard to imagine that you’re actually reading before you’re responding, and it makes it very difficult to carry on a dialogue. So for that reason, I think I’ll stop here.
When I saw this in the comment feed, I thought “Wow, Steve Landsburg on Less Wrong!” Then I saw that he was basically just arguing with one person.
While I think you’re not correct in this debate, I hope you’ll continue to post here. Your books have been a source of much entertainment and joy for me.
Bo102010: Thanks for the kind words. I’m not sure what the community standards are here, but I hope it’s not inappropriate to mention that I post to my own blog almost every weekday, and of course I’ll be glad to have you visit.
I can second that. Though, for lack of education, I cannot tell who’s right in this debate; I don’t think anybody is, as it’s just pure metaphysical musing about the nature of reality. But so far I have really enjoyed reading your book. I also hope you’ll participate in other discussions here at lesswrong.com. It’s my favorite place.
Sorry for the possible bad publicity; I made the mistake of quickly sharing something I had just read and found intriguing, without the ability to portray it adequately, especially on this forum, which is about rationality as a practical tool for attaining your goals rather than pure philosophy detached from evidence and prediction.
I also subscribed to your blog.
P.S.
Sent you a message; you can find it in your inbox.
Are you reading my replies? Saying that arithmetic is “the standard model of the natural numbers” does not
“cash out” the meaning of “arithmetic” into well-defined predictions
For one thing, it doesn’t give me predictions (i.e. constraints on expectations) that we check to see who’s right.
For another, it’s not well-defined—it doesn’t tell me how I would know (as is necessary for the area of dispute) if arithmetic “exists” at this or that time. (And, of course, as you found out, it requires further specification of what counts as a model...)
(ETA: See Eliezer_Yudkowsky’s great posts on how to dissolve a question and get beyond there being One Right Answer to e.g. the vague question about a tree falling in the forest when no one’s around.)
So if you don’t see why that fails to count as cashing out the term and identifying the real disagreement, then I agree further discussion is pointless.
But truth be told, you’re not going to “stop there”. You’re going to continue on, promoting your “deep” insights, wherever you can, to people who don’t know any better, instead of doing the real epistemic labor of achieving insights on the world.
First-order logic can’t distinguish between different sizes of infinity. Any finite or countable set of first-order statements with an infinite model has models of all sizes.
However, if you take second-order logic at face value, it’s actually quite easy to uniquely specify the integers up to isomorphism. The price of this is that second-order logic is not complete—the full set of semantic implications, the theorems which follow, can’t be derived by any finite set of syntactic rules.
So if you can use second-order statements—and if you can’t, it’s not clear how we can possibly talk about the integers—then the structure of integers, the subject matter of integers, can be compactly singled out by a small set of finite axioms. However, the implications of these axioms cannot all be printed out by any finite Turing machine.
Appropriately defined, you could state this as “finitely complex premises can yield infinitely complex conclusions” provided that the finite complexity of the premises is measured by the size of the Turing machine which prints out the axioms, yielding is defined as semantic implication (that which is true in all models of which the axioms are true), and the infinite complexity of the conclusions is defined by the nonexistence of any finite Turing machine which prints them all.
However this is not at all the sort of thing that Dawkins is talking about when he talks about evolution starting simple and yielding complexity. That’s a different sense of complexity and a different sense of yielding.
I misinterpreted this: “we can never know all the truths of euclidean geometry, but we can still specify euclidean geometry via a set of axioms. Not so for arithmetic.”
Eliezer: There are an infinite number of truths of euclidean geometry. How could our finite brains know them all?
This was not meant to be a profound observation; it was meant to correct Silas, who seemed to think that I was reading some deep significance into our inability to know all the truths of arithmetic. My point was that there are lots of things we can’t know all the truths about, and this was therefore not the feature of arithmetic I was pointing to.
A decision procedure is a finite specification of all truths of euclidean geometry; I can use that finite fact anywhere I could use any truth of geometry. I suppose there is a difference, but even so, it’s the wrong thing to say in a Godelian discussion.
it was meant to correct Silas, who seemed to think that I was reading some deep significance into our inability to know all the truths of arithmetic. My point was that there are lots of things we can’t know all the truths about, and this was therefore not the feature of arithmetic I was pointing to.
Yes, it was. When I and several others pointed out that arithmetic isn’t actually complex, you responded by saying that it is infinitely complex, because it can’t be finitely described, because to do so … you’d have to know all the truths.
Am I misreading that response? If so, how do you reconcile arithmetic’s infinite complexity with the fact that scientists in fact use it to compress descriptions of the world? An infinitely complex entity can’t help to compress your descriptions.
Arithmetic is complex because it cannot be captured in a small set of axioms.
What is this “it”? There are some who claim that when we think about arithmetic, we are thinking about a specific model of the usual axioms for arithmetic, which appears to be your view here. Every statement of arithmetic is either true or false in that model. But what reason is there to make this claim? We cannot directly intuit the truth of arithmetical statements, or mathematicians would not have to spend so much effort on proving theorems. We may observe that we have a belief that we are indeed thinking about a definite model of the axioms, but why should we believe that belief?
To say that we intuit a thing is no more than to say we believe it but do not know why.
As far as I understand it, the claim is that both camps are asking wrong or useless questions. Reality is inherently complex and logical possible. To ask for why complexity is generally there is asking for rainbows end. But I’ve only arrived on page 34 of his book today so...
Anyway, I’ve to get some sleep soon. Will come back to it tomorrow. Thanks.
Landsburg does not doubt biological evolution. It’s just an argument about complexity being inherent in the laws of nature, reality. And what it has to do with rationality, it’s thought provoking. And rationality is a means to an end in succeeding to reach your goals. If your goal is to fathom the nature of reality these thoughts are valid as they add to the pile of possibilities being worthy of consideration in this regard.
His thoughts on that are confused too. He claims that math is fundamental to physics, but also that it’s infinitely complex. That doesn’t work:
1) Math is simple in the sense that you need very little space to specify the entities needed to use it.
2) But Landsburg says it’s complex because you haven’t really specified it until you know every mathematical truth.
3) But then physics isn’t using math by that definition! It’s using a tiny, computable, non-complex subset of that.
(This is discussed at length in the links I gave.)
Thought-provoking is good, but don’t fall for the trap of worshipping someone for saying stuff that doesn’t make sense.
I’m not sure “thought-provoking” is actually a good thing any more than “reflectively coherent” is a good thing. “Thought-provoking” is just a promise of future benefit from understanding; that promise is often broken.
Then what are Dawkins and his opponents “equally wrong” about? What does it mean to say that complexity is “inherent in the laws of nature”? Or that it isn’t? What does Landsburg mean by “complexity”? Is arithmetic “complex” because it contains deep truths, or is it “simple” because it can be captured in a small set of axioms?
I have yet to understand what is being claimed here.
RichardKennaway:
Arithmetic is complex because it cannot be captured in a small set of axioms. More precisely, it cannot be specified by any (small or large) set of axioms, because any set of (true) axioms about arithmetic applies equally well to other structures that are not arithmetic. Your favorite set of axioms fails to specify arithmetic in the same way that the statement “bricks are rectangular” fails to specify bricks; there are lots of other things that are also rectangular.
This is not true, for example, of euclidean geometry, which can be specified by a set of axioms.
Silas Barta’s remarks notwithstanding, the question of which truths we can know has nothing to do with this; we can never know all the truths of euclidean geometry, but we can still specify euclidean geometry via a set of axioms. Not so for arithmetic.
Here we go again.
Then the universe doesn’t use that arithmetic in implementing physics, and it doesn’t have the significance you claim it does. Like I said just above, it uses the kind of arithmetic that can be captured in a small set of axioms. And like I said in our many exchanges, it’s true that modern computers can’t answer every question about the natural numbers, but they don’t need to. Neither does the universe.
Yes, but you only need finite space to specify bricks well enough to get the desired functionality of bricks. Your argument would imply that bricks are infinitely complex because we don’t have a finite procedure for determining whether an arbitrary object “really” is a brick, because of e.g. all the borderline cases. (“Do the stones in a stone wall count as bricks?”)
Then the universe doesn’t use that arithmetic in implementing physics,
How do you know?
Like I said just above, it uses the kind of arithmetic that can be captured in a small set of axioms.
What kind of arithmetic is that? It would have to be a kind of arithmetic to which Godel’s and Tarski’s theorems don’t apply, so it must be very different indeed from any arithmetic I’ve ever heard of.
Mainly from the computability of the laws of physics.
Right—meaning the universe doesn’t use arithmetic (as you’ve defined it). You’re getting tripped up on the symbol “arithmetic”, for which you keep shifting meanings. Just focus on the substance of what you mean by arithmetic: Does the universe need that to work? No, it does not. Do computers need to completely specify that arithmetic to work? No, they do not.
By the way:
1) To quote someone here, use the greater-than symbol before the quoted paragraph, as described in the help link below the entry field for a comment.
2) One should be cautious about modding down someone one is in a direct argument with, as that tends to compromise one’s judgment. I have not voted you down, though if I were a bystander to this, I would.
Silas:
First—I have never shifted meanings on the definition of arithmetic. Arithmetic means the standard model of the natural numbers. I believe I’ve been quite consistent about this.
Second—as I’ve said many times, I believe that the most plausible candidates for the “fabric of the Universe” are mathematical structures like arithmetic. And as I’ve said many times, obviously I can’t prove this. The best I can do is explain why I find it so plausible, which I’ve tried to do in my book. If those arguments don’t move you, well, so be it. I’ve never claimed they were definitive.
Third—you seem to think (unless I’ve misread you) that this vision of the Universe is crucial to my point about Dawkins. It’s not.
Fourth—Here is my point about Dawkins; it would be helpful to know which part(s) you consider the locus of our disagreement:
a) the natural numbers—whether or not you buy my vision of them as the basis of reality—are highly complex by any reasonable definition (I am talking here about the actual standard model of the natural numbers, not some axiomatic system that partly describes them);
b) Dawkins has said, repeatedly, that all complexity—not just physical complexity, not just biological complexity, but all complexity—must evolve from something simpler. And indeed, his argument needs this statement in all its generality, because his argument makes no special assumption that would restrict us to physics or biology. It’s an argument about the nature of complexity itself.
c) Therefore, if we buy Dawkins’s argument, we must conclude that the natural numbers evolved from something simpler.
d) The natural numbers did not evolve from something simpler. Therefore Dawkins’s argument can’t be right.
It seems to me that the definition of complexity is the root of any disagreement here. It seems obvious to me that the natural numbers are not complex in the sense that a human being is complex. I don’t understand what kind of complexity you could be talking about that places natural numbers on an equivalent footing with, say, the entire ecosystem of the planet Earth.
Contrary to what SteveLandsburg says in his reply, I think you are exactly right. And this is how our disagreement originally started, by me explaining why he’s wrong about complexity.
Scientists use math to compress our description of the universe. It wouldn’t make much sense to use something infinitely complex for data compression!
So, to the extent he’s talking about math or arithmetic in a way that does have such complexity, he’s talking about something that isn’t particularly relevant to our universe.
I think the system of natural numbers is pretty damn complex. But the system of natural numbers is an abstract object and Dawkins likely never meant for his argument to apply to abstract objects, thinks all abstract objects are constructed by intelligences or denies the existence of abstract objects.
I think there is a good chance all abstract objects are constructed, and a better chance that the system of natural numbers was constructed (or at least that the system, when construed as an object and not a structural analog, is constructed and not discovered; that is, numbers are more like adjectives than nouns, and adjectives aren’t objects).
mattnewport: This would seem to put you in the opposite corner from Silas, who thinks (if I read him correctly) that all of physical reality is computably describable, and hence far simpler than arithmetic (in the sense of being describable using only a small and relatively simple fragment of arithmetic).
Be that as it may, I’ve blogged quite a bit about the nature of the complexity of arithmetic (see an old post called “Non-Simple Arithmetic” on my blog). In brief: a) no set of axioms suffices to specify the standard model of arithmetic (i.e. to distinguish it from other models). And b) we have the subjective reports of mathematicians about the complexity of their subject matter, which I think should be given at least as much weight as the subjective reports of ecologists. (There are a c), d) and e) as well, but in this short comment, I’ll rest my case here.)
Your biggest problem here, and in your blog posts, is that you equivocate between the structure of the standard natural numbers (N) and the theory of that structure (T(N), also known as True Arithmetic). The former is recursive and (a reasonable encoding of) it has pretty low Kolmogorov complexity. The latter is wildly nonrecursive and has infinite K-complexity. (See almost any of Chaitin’s work on algorithmic information theory, especially the Omega papers, for definitions of the K-complexity of a formal system.)
The difference between these two structures comes from the process of translating between them. Once explained properly, it’s almost intuitive to a recursion theorist, or a computer scientist versed in logic, that there’s a computable reduction from any language in the Arithmetic Hierarchy to the set of true sentences of True Arithmetic. This implies that going from a description of N to a truth-enumerator or decision procedure for T(N) requires a hypercomputer with an infinite tower of halting, meta-halting, … meta^n-halting … oracles.
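To make the oracle-tower remark concrete (my gloss via Post’s theorem, glossing over encoding details):

$$\emptyset^{(n)} \text{ decides the } \Sigma^0_n \text{ sentences of } T(N), \qquad T(N) \equiv_T \emptyset^{(\omega)}.$$

That is, the n-th halting oracle settles the truths with n quantifier alternations, and True Arithmetic as a whole is Turing-equivalent to the ω-th jump.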
However, it so happens that simulating the physical world (or rather, our best physical ‘theories’, which in a mathematical sense are structures, not theories) on a Turing machine does not actually require T(N), only N. We only use theories, as opposed to models, of arithmetic, when we go to actually reason from our description of physics to consequences. And any such reasoning we actually do, just like any pure mathematical reasoning we do, depends only on a finite-complexity fragment of T(N).
Now, how does this make biology more complex than arithmetic? Well, to simulate any biological creature, you need N plus a bunch of biological information, which together has more K-complexity than just N. To REASON about the biological creature, at any particular level of enlightenment, requires some finite fragment of T(N), plus that extra biological information. To enumerate all true statements about the creature (including deeply-alternating quantified statements about its counterfactual behaviour in every possible circumstance), you require the infinite information in T(N), plus, again, that extra biological information. (In the last case it’s of course rather problematic to say there’s more complexity there, but there’s certainly at least as much.)
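If it helps to see the three levels side by side, here is my loose shorthand (with ⊕ meaning “together with the extra biological information”):

$$K(N \oplus \text{bio}) > K(N), \qquad K(\text{finite fragment of } T(N) \oplus \text{bio}) < \infty, \qquad K(T(N) \oplus \text{bio}) = \infty.$$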
Note that I didn’t know all this until this morning, when I read your blog argument with Silas and Snorri; I thank all three of you for a discussion that greatly clarified my grasp on the levels of abstraction in play here.
(This morning I would have argued strongly against your Platonism as well; tonight I’m not so sure...)
Splat: Thanks for this; it’s enlightening and useful.
The part I’m not convinced of is this:
A squirrel is a finite structure; it can be specified by a sequence of A’s, C’s, G’s and T’s, plus some rules for protein synthesis and a finite number of other facts about chemistry. (Or if you think that leaves something out, it can be described by the interactions among a large but finite collection of atoms.) So I don’t see where we need all of N to simulate a squirrel.
Well, if you need to simulate a squirrel for just a little while, and not for unbounded lengths of time, a substructure of N (without closure under the operations) or a structure with a considerable amount of sharing with N (like 64-bit integers on a computer) could suffice for your simulation.
The problem you encounter here is that these substructures and near-substructures, once they reach a certain size, actually require more information to specify than N itself. (How large this size is depends on which abstract computer you used to define your instance of K-complexity, but the asymptotic trend is unavoidable.)
If this seems paradoxical, consider that after a while the shortest computer program for generating an open initial segment of N is a computer program for generating all of N plus instructions indicating when to stop.
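The standard counting argument behind this, for what it’s worth: to enumerate the initial segment you must essentially know where to stop, so

$$K(\{0,\dots,n\}) = K(n) \pm O(1), \qquad K(n) \ge \log_2 n - c \text{ for most } n,$$

and once n is large enough, those roughly log n bits for the stopping point exceed the fixed few hundred bits that specify all of N.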
Either way, it so happens that the biological information you’d need to simulate the squirrel dwarfs N in complexity, so even if you can find a sufficient substitute for N that’s “lightweight” you can’t possibly save enough to make your squirrel simulation less complex than N.
Splat:
1)
This depends on what you mean by “specify”. To distinguish N from other mathematical structures requires either an infinite (indeed non-recursive) amount of information or a second-order specification including some phrase like “all predicates”. Are you referring to the latter? Or to something else I don’t know about?
2) I do not know Chaitin’s definition of the K-complexity of a structure. I’ll try tracking it down, though if it’s easy for you to post a quick definition, I’ll be grateful. (I do think I know how to define the K-complexity of a theory.) I presume that if I knew this, I’d know your answer to question 1).
3) Whatever the definition, the question remains whether K-complexity is the right concept here. Dawkins’s argument does not define complexity; he treats it as “we know it when we see it”. My assertion has been that Dawkins’s argument applies in a context where it leads to an incorrect conclusion, and therefore can’t be right. To make this argument, I need to use Dawkins’s intended notion of complexity, which might not be the same as Chaitin’s or Kolmogorov’s. And for this, the best I can do is to infer from context what Dawkins does and does not see as complex. (It is, clear from context that he sees complexity as a general phenomenon, not just a biological one.)
4) The natural numbers are certainly an extremely complex structure in the everyday sense of the word; after thousands of years of study, people are learning new and surprising things about them every day, and there is no expectation that we’ve even scratched the surface. This is, of course, a manifestation of the “wildly nonrecursive” nature of T(N), all of which is reflected in N itself. And this, again, seems pretty close to the way Dawkins uses the word.
5) I continue to be most grateful for your input. I see that Silas is back to insisting that you can’t simulate a squirrel with a simple list of axioms, after having been told forty-eight bajillion times (here and elsewhere) that nobody’s asserting any such thing; my claim is that you can simulate a squirrel in the structure N, not in any particular axiomatic system. Whether or not you agree, it’s a pleasure to engage with someone who’s not obsessed with pummelling straw men.
Replying out of order:
2) A quick search of Google Scholar didn’t net me a Chaitin definition of K-complexity for a structure. This doesn’t surprise me much, as his uses of AIT in logic are much more oriented toward proof theory than model theory. Over here you can see some of the basic definitions. If you read pages 7–10 and then my explanation to Silas here, you can figure out what the K-complexity of a structure means. There’s also a definition of the algorithmic complexity of a theory in section 3 of the Chaitin.
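Roughly, and modulo encoding details, the definition comes out as

$$K(\mathfrak{A}) \;=\; \min\{\, |p| : U(p) \text{ enumerates a fixed encoding of the atomic diagram of } \mathfrak{A} \,\},$$

where U is the chosen universal machine.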
According to these definitions, the complexity of N is a few hundred bits for reasonable choices of machine, and the complexity of T(N) is ∞.
1) It actually is pretty hard to characterize N extrinsically/intensionally; to characterize it with first-order statements takes infinite information (as above). The second-order characterization, by contrast, is a little hard to interpret. It takes a finite amount of information to pin down the model, but the second-order theory PA2 still has infinite K-complexity because of its lack of complete rules of inference.
Intrinsic/extensional characterizations, on the other hand, are simple to do, as referenced above. Really, Gödel Incompleteness wouldn’t be all that shocking in the first place if we couldn’t specify N any other way than its first-order theory! Interesting, yes, shocking, no. The real scandal of incompleteness is that you can so simply come up with a procedure for listing all the ground (quantifier-free) truths of arithmetic and yet passing either to or from the kind of generalizations that mathematicians would like to make is fraught with literally infinite peril.
3&4) Actually I don’t think that Dawkins is talking about K-complexity, exactly. If that’s all you’re talking about, after all, an equal-weight puddle of boiling water has more K-complexity than a squirrel does. I think there’s a more involved, composite notion at work that builds on K-complexity and which has so far resisted full formalization. Something like this, I’d venture.
The complexity of the natural numbers as a subject of mathematical study, while certainly well-attested, seems to be of a different sense than either K-complexity or the above. Further, it’s unclear whether we should really be placing the onus of this complexity on N, on the semantics of quantification in infinite models (which N just happens to bring out), or on the properties of computation in general. In the latter case, some would say the root of the complexity lies in physics.
Also, I very much doubt that he had in mind mathematical structures as things that “exist”. Whether it turns out that the difference in the way we experience abstractions like the natural numbers and concrete physical objects like squirrels is fundamental, as many would have it, or merely a matter of our perspective from within our singular mathematical context, as you among others suspect, it’s clear that there is some perceptible difference involved. It doesn’t seem entirely fair to press the point this much without acknowledging the unresolved difference in ontology as the main point of conflict.
Trying to quantify which thing is more complex is really kind of a sideshow, although an interesting one. If one forces both senses of complexity into the K-complexity box then Dawkins “wins”, at the expense of both of you being turned into straw men. If one goes by what you both really mean, though, I think the complexity is probably incommensurable (no common definition or scale) and the comparison is off-point.
5) Thank you. I hope the discussion here continues to grow more constructive and helpful for all involved.
Relevant link: http://lesswrong.com/lw/vh/complexity_and_intelligence/
Splat:
Thanks again for bringing insight and sanity to this discussion. A few points:
1) Your description of the structure N presupposes some knowledge of the structure N; the program that prints out the structure needs a first statement, a second statement, etc. This is, of course, unavoidable, and it’s therefore not a complaint; I doubt that there’s any way to give a formal description of the natural numbers without presupposing some informal understanding of the natural numbers. But what it does mean, I think, is that K-complexity (in the sense that you’re using it) is surely the wrong measure of complexity here—because when you say that N has low K-complexity, what you’re really saying is that “N is easy to describe provided you already know something about N”. What we really want to know is how much complexity is embedded in that prior knowledge.
1A) On the other hand, I’m not clear on how much of the structure of N is necessarily assumed in any formal description, so my point 1) might be weaker than I’ve made it out to be.
2) It has been my position all along that K-complexity is largely a red herring here in the sense that it need not capture Dawkins’s meaning. Your observation that a pot of boiling water is more K-complex than a squirrel speaks directly to this point, and I will probably steal it for use in future discussions.
3) When you talk about T(N), I presume you mean the language of Peano arithmetic, together with the set of all true statements in that language. (Correct me if I’m wrong.) I would hesitate to call this a theory, because it’s not recursively axiomatizable, but that’s a quibble. In any event, we do know what we mean by T(N), but we don’t know what we mean by T(squirrel) until we specify a language for talking about squirrels—a set of constant symbols corresponding to tail, head, etc., or one for each atom, or so on—and various relations, etc. So T(N) is well defined, while T(squirrel) is not. But whatever language you settle on, a squirrel is still going to be a finite structure, so T(squirrel) is not going to share the “wild nonrecursiveness” of T(N) (which is closely related to the difficulty of giving an extrinsic characterization). That seems to me to capture a large part of the intuition that the natural numbers are more complex than a squirrel.
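(In symbols, the presumption being made: $T(N) = \{\varphi \in L_{PA} : N \models \varphi\}$ is fixed once the language of Peano arithmetic is fixed, whereas $T(\text{squirrel}) = \{\varphi \in L : \text{squirrel} \models \varphi\}$ is defined only relative to some chosen finite language $L$.)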
4) You are probably right that Dawkins wasn’t thinking about mathematical structures when he made his argument. But because he does claim that his argument applies to complexity in general, not just to specific instances, he’s stuck (I think) either accepting applications he hadn’t thought about or backing off the generality of his claim. It’s of course hard to know exactly what he meant by complexity, but it’s hard for me to imagine any possible meaning consistent with Dawkins’s usage that doesn’t make arithmetic (literally) infinitely more complex than a squirrel.
5) Thanks for trying to explain to Silas that he doesn’t understand the difference between a structure and an axiomatic system. I’ve tried explaining it to him in many ways, at many times, in many forums, but have failed to make any headway. Maybe you’ll have better luck.
6) If any of this seems wrong to you, I’ll be glad to be set straight.
1) Unless they say otherwise, you should assume someone is using the standard meanings for the terms they use, which would mean Dawkins is using the intuitive definition, which closely parallels K-complexity.
2) If you’re going to write a book hundreds of pages long in which you crucially rely on the concept of complexity, you need to explicitly define it. That’s just how it works. If you know what concept of complexity is “the” right one here, you need to spell it out yourself.
3) Most importantly, you have shown Dawkins’s argument to be in error in the context of an immaterial realm that is not observable and does not interact with this universe. Surely, you can think of some reason why Dawkins doesn’t intend to refer to such realms, can’t you? (Hint: Dawkins is an atheist, materialist, and naturalist—just like you, in other words, until it comes to the issue of math.)
ETA: If any followers of this exchange think I’m somehow not getting something, or being unfair to SteveLandsburg, please let me know, either as a reply in the thread or a PM, whether or not you use your normal handle.
Well, Silas, what I actually did was write a book 255 pages long of which this whole Dawkins/complexity thing occupies about five pages (29-34) and where complexity is touched on exactly once more, in a brief passage on pages 7-8. From the discrepancy between your description and reality, I infer that you haven’t read the book, which would help to explain why your comments are so bizarrely misdirected.
Oh, and I see that you’re still going on about axiomatic descriptions of squirrels, as if that were relevant to something I’d said. (Hint: A simulation is not an axiomatic system. That’s 48 bajillion and one.)
I have not read the entire book. I have read many long portions of it, mostly the philosophical ones and those dealing with physics. I was drawn to it on the assumption that surely you would have defined complexity in your exposition!
It’s misleading to say that your usage of complexity takes up only 8 pages and is therefore insignificant. Rather, the point you make about complexity is your grounding for broader claims about the role mathematics plays in the universe, which you come back to frequently. The explicit mention of the term “complexity” is thus a poor measure of how much you rely on it.
But even if it were just 8 pages, you should still have defined it, and you should still not expect to have achieved insights on the topic, given that you haven’t defined it.
(I certainly wouldn’t want to buy it—why should I subsidize such confused thinking? I don’t even like your defenses of libertarianism, despite being libertarian.)
Ah, another suddenly-crucial distinction to make, so you can wiggle out of being wrong!
I should probably use this opportunity to both show I did read many portions, and show why Landsburg doesn’t get what it means to really explain something. His explanation of the Heisenberg Uncertainty Principle (which gets widely praised as a good explanation for some reason) is this: think of an electron as moving in a circle within a square. If you measure its vertical position, its closeness to the top determines the chance of getting a “top” or “bottom” reading.
Likewise the horizontal direction: if you measure the horizontal position of the electron, your chances of getting a “left” or “right” reading depends on how far it is from that side.
And for the important part: why can’t you measure both at the same time? Landsburg’s brilliant explanation: um, because you can’t.
But that’s what the explanation was supposed to demystify in the first place! You can’t demystify a mystery by feeding it back in as a black-box fact. To explain it, you would need to explain enough of the dynamics of quantum systems that, at the end, your reader doesn’t view precise measurement of both position and momentum as even being coherent. Saying “oh, you can’t because you can’t” isn’t an explanation.
I didn’t say that. Read it again. I said that there is some finite axiom list that can describe squirrels, but it’s not just the axioms that suffice to let you use arithmetic. It’s those, plus biological information about squirrels. But this arithmetic is not the infinitely complex arithmetic you talk about in other contexts!
You can’t—you need axioms beyond those that specify N. The fact that the biological model involving those axioms uses math, doesn’t mean you’ve described it once you’ve described the structure N. So whether or not you call that “simulating it in the structure N”, it’s certainly more complex than just N.
I’m responding here to your invitation in the parent, since this post provides some good examples of what you’re not getting.
Simulating squirrels and using arithmetic require information, but that information is not supplied in the form of axioms. The best way to imagine this in the case of arithmetic is in terms of a structure.
Starting from the definition in that wikipedia page, you can imagine giving the graphs of the universe and functions and relations as Datalog terms. (Using terms instead of tuples keeps the graphs disjoint, which will be important later.) Like so:
Universe:
is_number(0)
,is_number(1)
, …

0:
zero(0)

S:
next(0,1)
,next(1,2)
, …

+:
add_up_to(0,0,0)
,add_up_to(0,1,1)
,add_up_to(1,0,1)
, …

…and so on.
Then you use some simple recursive coding of datalog terms as binary. What you’re left with is just a big (infinite) set of binary strings. The Kolmogorov complexity of the structure N, then (the thing you need to use arithmetic), is the size of the shortest program that enumerates the set, which is actually very small.
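If it helps, here is one such short program—a sketch in Python rather than Datalog, using the same illustrative term names as above (nothing depends on these particulars):

from itertools import count

def enumerate_N():
    # Enumerate (an encoding of) the atomic diagram of the structure N.
    # Every fact about the universe, zero, successor, and addition is
    # eventually printed; the program's brevity is the sense in which
    # K(N) is small.
    yield "zero(0)"
    for n in count():
        yield "is_number(%d)" % n
        yield "next(%d,%d)" % (n, n + 1)
        # Dovetail: emit every addition fact whose larger argument is n.
        for i in range(n):
            yield "add_up_to(%d,%d,%d)" % (i, n, i + n)
            yield "add_up_to(%d,%d,%d)" % (n, i, n + i)
        yield "add_up_to(%d,%d,%d)" % (n, n, n + n)

if __name__ == "__main__":
    g = enumerate_N()
    for _ in range(12):
        print(next(g))

(The multiplication graph would be one more clause of the same shape.)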
Note that this is actually the same arithmetic that Steve is talking about! It is just a different level of description, one that is much simpler but entirely sufficient to conduct simulations with. It is only in understanding the long-term behavior of simulations without running them that one requires any of the extra complexity embodied in T(N) (the theory). To actually run them you just need N (the structure).
The fact that you don’t seem to understand this point yet leads me to believe you were being a little unfair when you said:
Now, if you want to complete the comparison, imagine you’re creating a structure that includes a universe with squirrel-states and times, and a function from time to squirrel state. This would look something like:
is_time(1:00:00)
,is_time(1:00:01)
, …

is_squirrel_state(<eating nut>)
,is_squirrel_state(<rippling tail>)
,is_squirrel_state(<road pizza>)

squirrel_does(1:00:00, <rippling tail>)
, …

The squirrel states, though, will not be described by a couple of words like that, but by incredibly detailed descriptions of the squirrel’s internal state—what shape all its cells are, where all the mRNAs are on their way to the ribosomes, etc. The structure you come up with will take a much bigger program to enumerate than N will. (And I know you already agree with the conclusion here, but making the correct parallel matters.)
(Edit: fixed markup.)
I wasn’t careful to distinguish axioms from other kinds of information in the model, and I think it’s a distraction to do so because it’s just an issue of labels (which as you probably saw from the discussion is a major source of confusion). My focus was on tabulating the total complexity of whatever-is-being-claimed-is-significant. For that, you only need to count up how much information goes into your “message” describing the data (in the “Minimum Message Length criterion” sense of “message”). Anything in such a message can be described without loss of generality as an axiom.
If I want to describe squirrels, I will find, like most scientists find, that the job is much easier if I can express things using arithmetic. Arithmetic is so helpful that, even after accounting for the cost of telling you how to use it (the axioms-or-whatever of math), I still save in total message length. Whether you call the squirrel info I gathered from nature, or the specification of math, the “axioms” doesn’t matter.
But it’s not the same arithmetic SteveLandsburg is talking about, if you follow through to the implications he claims fall out from it. He claims arithmetic—the infinitely complex one—runs the universe. It doesn’t. The universe only requires the short message specifying N, plus the (finite) particulars of the universe. Whatever infinitely-complex thing he’s talking about from a “different level of description” isn’t the same thing, and can’t be the same thing.
What’s more, the universe can’t contain that thing because there is no (computable) isomorphism between it and the universe. As we derive the results of longer and longer chains of reasoning, our universe starts to contain more and more complex pieces of that thing, but it still wouldn’t be somehow fundamental to the universe’s operation—not if we’re just now getting to contain pieces of it.
I’m sorry, I don’t see how that contradicts what I said or shows a different parallel. Now, I certainly didn’t use the N vs. T(N) terminology you did, but I clearly explained how there have to be two separate “arithmetics” in play here, as best summarized in my comment here. Whatever infinitely complex arithmetic SteveLandsburg is talking about, isn’t the one that runs the universe. The insights on one don’t apply to the other.
Okay, pretend I’ve given you the axioms sufficient for you to add, subtract, multiply, and divide. Can you simulate squirrels now? Of course not. You still have to go out and collect information about squirrels and add it to your description of the axioms of arithmetic (which suffice for all of N) to have a description of squirrels.
You claim that because you can simulate squirrels with (a part of) N, then N suffices to simulate squirrels. But this is like saying that, because you know the encoding method your friend uses to send you messages, you must know the content of all future messages.
That’s wrong, because those are different parts of the compressed data: one part tells you how to decompress, another tells you what you’re decompressing. Knowing how to decompress (i.e., the axioms of N) is different from knowing the string to be decompressed by that method (i.e. the arithmetic symbols encoding squirrels).
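A toy version of the two-part point, with zlib standing in for “the rules of arithmetic” as the shared decompression method (illustrative only; nothing hinges on zlib specifically):

import zlib

# Part 1 of the message: the decompression method. Short, fixed, shared
# in advance -- analogous to the axioms-or-whatever that specify N.
decompress = zlib.decompress

# Part 2: the content. Nothing in part 1 predicts it -- analogous to the
# empirical squirrel data that no amount of arithmetic hands you for free.
squirrel_data = b"ACGT" * 1000           # stand-in for observations of squirrels
message = zlib.compress(squirrel_data)   # what actually gets transmitted

# Only both parts together recover the description.
assert decompress(message) == squirrel_data
print(len(message), "bytes of content, plus a fixed-size decoder spec")

Knowing the codec in advance is exactly what makes the message short; it doesn’t tell you what the message says.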
By the way, I really hope your remark about Splat’s comment being “enlightening” was just politeness, and that you didn’t actually mean it. Because if you did, that would mean you’re just now learning the distinction between N and T(N), the equivocation between which undermines your claims about arithmetic’s relation to the universe.
And much of his comment was a restatement of my point about the difference between the complex arithmetic you refer to, and the arithmetic the universe actually runs on. (I’m not holding my breath for a retraction or a mea culpa or anything, just letting people know what they’re up against here.)
Because remember—nothing is more important to SilasBarta than politeness!
Touche :-P
Again, this word complexity is used in many ways. Complexity in the sense of “humans find this complicated” is a different concept from complexity in the sense of Kolmogorov complexity.
Don’t worry guys, I didn’t let you down. I addressed the issue from the perspective of Kolmogorov complexity in my first blog response. Landsburg initially replied with (I’m paraphrasing), “so what if you became an expert on information theory? That’s not the only meaning of complexity.”
Only later did he try to claim that he also meets the Kolmogorov definition.
(And FWIW, I’m not an expert on information theory—it’s just a hobby. I guess my knowledge just looked impressive to someone...)
Then what do you mean when you say “integers”^H^H “natural numbers”, if no set of premises suffices to talk about them as opposed to something else?
Anyway, no countable set of first-order axioms works. But a finite set of second-order axioms does. So to talk about the natural numbers, it suffices merely to think that when you say “Any predicate that is true of zero, and is true of the successor of every number it is true of, is true of all natural numbers” you made sense when you said “any predicate”.
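Written out (with the second-order quantifier doing the work in the last axiom), the axioms in question are just:

$$\forall n\,\bigl(S(n) \neq 0\bigr), \qquad \forall m\,\forall n\,\bigl(S(m) = S(n) \to m = n\bigr),$$
$$\forall P\,\bigl[\,P(0) \land \forall n\,(P(n) \to P(S(n))) \to \forall n\,P(n)\,\bigr].$$

By Dedekind’s argument, any two models of these are isomorphic.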
It is this sort of minor-seeming yet important technical inaccuracy that separates “The Big Questions” from “Good and Real”, I’m afraid.
Natural numbers, rather. (Minor typo.)
I think that you have to be careful about claims that second-order logic fixes a unique model. Granted, you can derive the statement “There exists a unique model of the axioms of arithmetic.”
But, for example, what in reality does your “any predicate” quantifier range over? If, under interpretation, it ranges over subsets of the domain of discourse, well, what exactly constitutes a subset? This presumes that you have a model of some set theory in hand. How do you specify which model of set theory you’re using? So far as I know, there’s no way out of this regress.
[ETA: I’m not a logician. I’m definitely open to correction here.]
[ETA2: And now that I read more carefully, you were acknowledging this point when you wrote, “it suffices merely to think that . . . you made sense when you said ‘any predicate’.”
However, you didn’t acknowledge this issue in your earlier comment. I think that it’s too significant an issue to be dismissed with an “it suffices merely...”. When an infinite regress threatens, it doesn’t suffice to push the issue back a level and say “it suffices merely to show that that’s the last level.”]
Sure, and that’s the age-old argument for why we should not take second-order logic at face value. But in that case we cannot go around blithely talking about the integers, for there is no language we could use to speak of them, or any other infinite set. We would be forbidden from saying that there is something we cannot talk about, and this is not surprising—what is it you can’t refer to?
I’m not familiar with the literature of this argument. (It was probably clear from the tentativeness of my comment that I was thinking my own murky way through this issue.)
You seem to take it as the default that we should take second-order logic at face value. (Now that I know what you mean by “face value”, I see that you did acknowledge this issue in your earlier comment.) But I should think that the default would be to be skeptical about this. Why expect that we have a canonical model when we talk about sets or predicates if we’re entertaining skepticism that we have a canonical model for integer-talk?
We don’t. Skepticism of sets, predicates, and canonical integers is all the same position in the debate.
And so is skepticism of canonical Turing machines, as far as I can tell. Specifically, skepticism that there is always a fact of the matter as to whether a given TM halts.
I think you might be able to make the skeptical position precise by constructing nonstandard variants of TMs, where the time steps and tape squares are numbered with nonstandard naturals and the number of symbols and states are also nonstandard. You could then relate these back to the nonstandard models that produced them by using a recursive description of N to regenerate the nonstandard model of the natural numbers you started with. This would show that there are nonstandard variants of computability that all believe in different ‘standard’, ‘minimal’ models of arithmetic, and are unaware of the existence of smaller models—and thus, presumably, of the ‘weaker’ (because they halt less often) notions of Turing Machines.
Now, I’m not yet sure if this construction goes through as I described it; for me, if it does it weighs against the existence of a ‘true’ Standard Model and if it doesn’t it weighs in favor.
I’m not sure, but I think it’s impossible to construct a computable nonstandard model of the integers (one where you can implement operations like +).
It is in fact provably impossible to construct a computable nonstandard model (where, say, S and +, or S and × are both computable relations) in a standard model of computation. What I was referring to was a nonstandard model that was computable according to an equally nonstandard definition of computation, one that makes explicit the definitional dependence of Turing Machines on the standard natural numbers and replaces them with nonstandard ones.
The question I’m wondering about is whether such a definition leads to a sensible theory of computation (at least on its own terms) or whether it turns out to just be nonsense. This may have been addressed in the literature but if so it’s beyond the level to which I’ve read so far.
Would you give a reference? I found it easy to find assertions such as “the completeness theorem is not constructively provable,” but this statement is a little stronger.
Tennenbaum’s Theorem
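For anyone following along without chasing the link, the statement is roughly:

$$\text{If } \mathcal{M} \models \mathrm{PA} \text{ is countable and nonstandard, then neither } +^{\mathcal{M}} \text{ nor } \times^{\mathcal{M}} \text{ is computable.}$$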
I believe that this claim is based on a defective notion of what it takes to refer to something successfully. The issue that we’re talking about here is a manifestation of that defect. I’m trying to work out a different conception of reference, but it’s very much a work in progress.
No, it wouldn’t—he’s saying basically the same thing I did. The laws of physics are computable. In describing observations, we use concepts from math. The reason we do so is that it allows simpler descriptions of the universe.
Right, I’ve explained before why your arguments are in error. We can talk more about that some other time.
No, I accept that they’re separate errors.
Okay:
If what you describe here is what you mean by both “the natural numbers” and “the actual standard model of the natural numbers”, then I will accept this definition for the purposes of argument—but, using it consistently, it doesn’t have the properties you claim.
Disagree with this. Dawkins has been referring to existing complexity in the universe and the context of every related statement confirms this. But even accepting it, the rest of your argument still doesn’t follow.
Disagree. Again, let’s keep the same definition throughout. Recall what you said the natural numbers were:
The model arose from something simpler (like basic human cognition of counting of objects). The Map Is Not The Territory.
Ah, but now I know what you’re going to say: you meant that the sort of Platonic-space model of those natural numbers, which exists independently of whatever’s in our universe, has always been complex.
So, if you assume (like theists) that there’s some sort of really-existing realm, outside of the universe, that always has been, and is complex, then you can prove that … there’s a complexity that has always existed. Which is circular.
Silas: I agree that if arithmetic is a human invention, then my counterexample goes away.
If I’ve read you correctly, you believe that arithmetic is a human invention, and therefore reject the counterexample.
On that reading, a key locus of our disagreement is whether arithmetic is a human invention. I think the answer is clearly no, for reasons I’ve written about so extensively that I’d rather not rehash them here.
I’m not sure, though, that I’ve read you correctly, because you occasionally say things like “The Map Is Not The Territory” which seems to presuppose some sort of platonic Territory. But maybe I just don’t understand what you meant by this phrase.
[Incidentally, it occurs to me that perhaps you are misreading my use of the word “model”. I am using this word in the technical sense that it’s used by logicians, not in any of its everyday senses.]
Map and territory
More: Map and Territory (sequence)
Then you agree that your “counterexample” amounts to an assumption. If a Platonic realm exists (in some appropriate sense), and if Dawkins was haphazardly including that sense in the universe he is talking about when he describes complexity arising, then he is wrong that complexity always comes from simplicity.
If you assume Dawkins is wrong, he’s wrong. Was that supposed to be insightful?
It’s a false dispute, though. When you clarify the substance of what these terms mean, there are meanings for which we agree, and meanings for which we don’t. The only error is to refuse to “cash out” the meaning of “arithmetic” into well-defined predictions, but instead keep it boxed up into one ambiguous term, which you do here, and which you did for complexity. (And it’s kind of strange to speak for hundreds of pages about complexity, and then claim insights on it, without stating your definition anywhere.)
One way we’d agree, for example, is if we take your statements about the Platonic realm to be counterfactual claims about phenomena isomorphic to certain mathematical formalisms (as I said at the beginning of the thread).
The definitions aren’t incredibly different, which is why we have the same term for both of them. If you spell out that definition more explicitly, the same problems arise, or different ones will pop up.
(By the way, this doesn’t surprise me. This is the fourth time you’ve had to define a term within a definition you gave in order to avoid being wrong. It doesn’t mean you changed that “subdefinition”. But genuine insights about the world don’t look this contorted, where you have to keep saying, “No, I really meant this when I was saying what I meant by that.”)
Silas: This is really quite frustrating. I keep telling you exactly what I mean by arithmetic (the standard model of the natural numbers); I keep using the word to mean this and only this, and you keep claiming that my use of the word is either ambiguous or inconsistent. It makes it hard to imagine that you’re actually reading before you’re responding, and it makes it very difficult to carry on a dialogue. So for that reason, I think I’ll stop here.
When I saw this in the comment feed, I thought “Wow, Steve Landsburg on Less Wrong!” Then I saw that he was basically just arguing with one person.
While I think you’re not correct in this debate, I hope you’ll continue to post here. Your books have been a source of much entertainment and joy for me.
Bo102010: Thanks for the kind words. I’m not sure what the community standards are here, but I hope it’s not inappropriate to mention that I post to my own blog almost every weekday, and of course I’ll be glad to have you visit.
I can second that. Though, for lack of education, I cannot tell who’s right in this debate—I don’t think anybody is, for it is just pure metaphysical musing about the nature of reality. But so far I have really enjoyed reading your book. I also hope you’ll participate in other discussions here at lesswrong.com. It’s my favorite place.
Sorry for the possible bad publicity; I made the mistake of quickly sharing something I had just read and found intriguing, without the ability to portray it adequately—especially on this forum, which is about rationality as a practical tool to attain your goals rather than pure philosophy detached from evidence and prediction.
I also subscribed to your blog.
P.S. Sent you a message; you can find it in your inbox.
Are you reading my replies? Saying that arithmetic is “the standard model of the natural numbers” does not cash out the term.
For one thing, it doesn’t give me predictions (i.e. constraints on expectations) that we can check to see who’s right.
For another, it’s not well-defined—it doesn’t tell me how I would know (as is necessary for the area of dispute) if arithmetic “exists” at this or that time. (And, of course, as you found out, it requires further specification of what counts as a model...)
(ETA: See Eliezer_Yudkowsky’s great posts on how to dissolve a question and get beyond there being One Right Answer to e.g. the vague question about a tree falling in the forest when no one’s around.)
So if you don’t see how that doesn’t count as cashing out the term and identifying the real disagreement, then I agree further discussion is pointless.
But truth be told, you’re not going to “stop there”. You’re going to continue on, promoting your “deep” insights wherever you can, to people who don’t know any better, instead of doing the real epistemic labor of achieving insights on the world.
That doesn’t sound right. Can you point me to, for example, a Wikipedia page about this?
First-order logic can’t distinguish between different sizes of infinity. Any finite or countable set of first-order statements with an infinite model has models of every infinite size.
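(For reference, this is the Löwenheim–Skolem theorem: if a countable first-order theory $T$ has an infinite model, then for every infinite cardinal $\kappa$ it has a model of cardinality $\kappa$.)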
However, if you take second-order logic at face value, it’s actually quite easy to uniquely specify the integers up to isomorphism. The price of this is that second-order logic is not complete—the full set of semantic implications, the theorems which follow, can’t be derived by any finite set of syntactic rules.
So if you can use second-order statements—and if you can’t, it’s not clear how we can possibly talk about the integers—then the structure of integers, the subject matter of integers, can be compactly singled out by a small set of finite axioms. However, the implications of these axioms cannot all be printed out by any finite Turing machine.
Appropriately defined, you could state this as “finitely complex premises can yield infinitely complex conclusions” provided that the finite complexity of the premises is measured by the size of the Turing machine which prints out the axioms, yielding is defined as semantic implication (that which is true in all models of which the axioms are true), and the infinite complexity of the conclusions is defined by the nonexistence of any finite Turing machine which prints them all.
However this is not at all the sort of thing that Dawkins is talking about when he talks about evolution starting simple and yielding complexity. That’s a different sense of complexity and a different sense of yielding.
That makes more sense, thanks.
Any recommended reading on this sort of thing?
Decidability of Euclidean geometry: see Wikipedia’s list of decidable theories (#Some_decidable_theories).
I don’t know where Landsburg gets the claim that we can know all the truths of arithmetic.
Richard Kennaway:
I don’t know where Landsburg gets the claim that we can know all the truths of arithmetic.
I don’t know where you got the idea that I’d ever make such a silly claim.
I misinterpreted this: “we can never know all the truths of euclidean geometry, but we can still specify euclidean geometry via a set of axioms. Not so for arithmetic.”
Richard: Gotcha. Sorry if it was unclear which part the “not so” referred to.
Note that Landsburg is thus also incorrect in saying “we can never know all the truths of euclidean geometry”.
Eliezer: There are an infinite number of truths of euclidean geometry. How could our finite brains know them all?
This was not meant to be a profound observation; it was meant to correct Silas, who seemed to think that I was reading some deep significance into our inability to know all the truths of arithmetic. My point was that there are lots of things we can’t know all the truths about, and this was therefore not the feature of arithmetic I was pointing to.
A decision procedure is a finite specification of all truths of euclidean geometry; I can use that finite fact anywhere I could use any truth of geometry. I suppose there is a difference, but even so, it’s the wrong thing to say in a Godelian discussion.
Yes, it was. When I and several others pointed out that arithmetic isn’t actually complex, you responded by saying that it is infinitely complex, because it can’t be finitely described, because to do so … you’d have to know all the truths.
Am I misreading that response? If so, how do you reconcile arithmetic’s infinite complexity with the fact that scientists in fact use it to compress descriptions of the world? An infinitely complex entity can’t help to compress your descriptions.
What is this “it”? There are some who claim that when we think about arithmetic, we are thinking about a specific model of the usual axioms for arithmetic, which appears to be your view here. Every statement of arithmetic is either true or false in that model. But what reason is there to make this claim? We cannot directly intuit the truth of arithmetical statements, or mathematicians would not have to spend so much effort on proving theorems. We may observe that we have a belief that we are indeed thinking about a definite model of the axioms, but why should we believe that belief?
To say that we intuit a thing is no more than to say we believe it but do not know why.
As far as I understand it, the claim is that both camps are asking wrong or useless questions. Reality is inherently complex and logically possible. To ask why complexity is generally there is to ask for the rainbow’s end. But I’ve only arrived at page 34 of his book today, so...
Anyway, I have to get some sleep soon. Will come back to it tomorrow. Thanks.