Eliezer, your argument seems to confuse two different senses of information. You first define “bit” as “the ability to eliminate half the possibilities”—in which case, yes, if every organism has O(1) children then the logical “speed limit on evolution” is O(1) bits per generation.
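(Spelling out the step being granted here, with c as my own shorthand for the constant bound on offspring, not Eliezer's notation:)

```latex
% Selection retains at most one of the c = O(1) offspring per lineage,
% so a single generation can distinguish at most c outcomes:
\[
  \text{bits gained per generation} \;\le\; \log_2 c \;=\; O(1).
\]
```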
But you then conclude that “the meaningful DNA specifying a human must fit into at most 25 megabytes”—and more concretely, that “it is an excellent bet that nearly all the DNA which appears to be junk, really is junk.” I don’t think that follows at all.
The underlying question here seems to be this: suppose you’re writing a software application, and as you proceed, many bits of code are generated at random, many bits are logically determined by previous bits (albeit in a more-or-less “mindless” way), and at most K times you have the chance to fix a bit as you wish. (Bits can also be deleted as you go.) Should we then say that whatever application you end up with can have at most K bits of “meaningful information”?
Arguably yes, from some God’s-eye view. But any mortal examining the code could see far more than K of the bits fulfilling a “functional role”—indeed, possibly even all of them. The reason is that the web of logical dependencies, by which the K “chosen” bits interacted with the random bits to produce the code we see, could in general be too complicated ever to work out within the lifetime of the universe. And crucially, when biologists talk about how many base pairs are “coding” and how many are “non-coding”, it’s clearly the pragmatic sense of “meaningful information” they have in mind rather than the Platonic one.
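To make the software analogy concrete, here is a toy sketch in Python (all the parameters and update rules are made up for illustration, and nothing here is meant to model actual DNA): the "application" is assembled from random bits, bits mindlessly determined by earlier bits, and K deliberately chosen bits, and yet, when you flip almost any bit and watch the behavior change, nearly every bit looks functional from the outside.

```python
import random

# Toy parameters (made up): the finished "application" has N bits,
# of which only K were ever deliberately chosen.
N = 1000
K = 16

def build(chosen_bits, rng, chosen_positions):
    """Build the application: each position is a deliberately chosen bit,
    a bit 'mindlessly' determined by earlier bits, or a random bit."""
    bits = []
    choices = iter(chosen_bits)
    for i in range(N):
        if i in chosen_positions:
            bits.append(next(choices))               # one of the K fixed choices
        elif i > 0 and i % 3 == 0:
            bits.append(bits[i - 1] ^ bits[i // 2])  # determined by previous bits
        else:
            bits.append(rng.randint(0, 1))           # generated at random
    return bits

def behavior(bits):
    """Crude stand-in for 'what the application does'."""
    return hash(tuple(bits))

rng = random.Random(0)
chosen_positions = set(random.Random(1).sample(range(N), K))
chosen_bits = [rng.randint(0, 1) for _ in range(K)]
app = build(chosen_bits, rng, chosen_positions)

# Flip each bit in turn; if the behavior changes, the bit looks "functional"
# to an examiner, whether or not it was one of the K chosen ones.
baseline = behavior(app)
functional = sum(
    behavior(app[:i] + [1 - app[i]] + app[i + 1:]) != baseline
    for i in range(N)
)
print(f"{functional} of {N} bits look functional, though only {K} were chosen")
```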
Indeed, it’s not even clear that God could produce a ~K-bit string from which the final application could be reliably reconstructed. The reason is that the application also depends on random bits, of which there are many more than K. Without assuming some conjecture about pseudorandom number generators, it seems the most God could do would be to give us a function mapping the random bits to K bits, such that by applying that function we’d end up most of the time with an application that did more-or-less the same thing. (This actually leads to some interesting CS questions, but I’ll spare you for now! :) )
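Here is a minimal sketch of that distinction (toy numbers throughout, and no real cryptographic claim intended): if the random bits were allowed to come from a pseudorandom generator, then the K choices plus a short seed would reconstruct the application exactly; if they are genuinely random, their sheer number is what blocks any ~K-bit description.

```python
import random

K = 16            # deliberately chosen bits
SEED_BITS = 128   # length of a PRNG seed (toy figure)
N_RANDOM = 10**6  # how many "random" bits the build consumes: far more than K

def build(chosen_bits, seed):
    """Toy 'compiler': the finished application depends on the K choices
    and on a long stream of bits drawn from a seeded PRNG."""
    rng = random.Random(seed)
    stream = tuple(rng.randint(0, 1) for _ in range(N_RANDOM))
    return hash((tuple(chosen_bits), stream))

chosen = [0, 1] * (K // 2)
seed = 12345

# If the "random" bits really come from a PRNG, then seed + choices
# reconstruct the application exactly and reliably:
assert build(chosen, seed) == build(chosen, seed)
print("description length under a PRG-style assumption:", K + SEED_BITS, "bits")

# Without that assumption, the genuinely random bits dominate, and the best
# one could hand over is a *function* from those bits to K bits:
print("bits of genuine randomness otherwise in play:", N_RANDOM)
```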
To say something more concrete, without knowing much more than I do about biology, I wouldn’t venture a guess as to how much of the “junk DNA” is really junk. The analogy I prefer is the following: if I printed out the MS Word executable file, almost all of it would look like garbage to me, with only a few “coding regions” here and there (“It looks like you’re writing a letter. Would you like help?”). But while the remaining bits might indeed be garbage in some sense, they’re clearly not in the sense a biologist would mean.
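The analogy is easy to try at home. Here is a rough sketch (any binary on your machine will do, and the 8-character threshold is an arbitrary choice) that reports how little of an executable looks "meaningful" to a naive reader, much as the Unix strings utility would:

```python
import re
import sys

def readable_regions(data, min_len=8):
    """Find runs of printable ASCII of length >= min_len: the few
    'coding regions' a naive reader can spot amid the apparent garbage."""
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)

if __name__ == "__main__":
    data = open(sys.argv[1], "rb").read()   # path to any executable
    runs = readable_regions(data)
    visible = sum(len(r) for r in runs)
    print(f"{len(runs)} readable runs; {visible} of {len(data)} bytes "
          f"({100.0 * visible / len(data):.1f}%) look 'meaningful' to a naive reader")
```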