“But basically, the 1 bit/generation bound is information-theoretic; it applies, not just to any species, but to any self-reproducing organism, even one based on RNA or silicon. The specifics of how information is utilized, in our case DNA → mRNA → protein, don’t matter.”
OK. I’m familiar with information theory (less so with evolutionary biology, though I understand the basics), but I think the 1 bit/generation bound is—pardon the pun—a bit misleading, since:

1. A lot—I mean a lot—of crazy assumptions are made without any hard evidence to back them up. (E.g., the claim that mammals produce on average ~4 offspring, and that when they produce more, it’s compensated for by selection’s inefficiencies.)

2. I’m still not convinced that we’re measuring in the right units. Some mutations do absolutely nothing (for example, a codon mutating from UAU to UAC still codes for tyrosine), and some make a ridiculously huge difference. This kind of redundancy, along with many other factors, makes me wonder whether the 1 bit needs some scaling factor (back-of-envelope sketch below)...
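To make point 2 concrete, here’s the kind of scaling factor I mean (entirely my own back-of-envelope sketch, assuming the standard genetic code: 64 codons distinguishing only 20 amino acids plus a stop signal):

```python
from math import log2

# UAU and UAC both code for tyrosine, so the third-position
# point mutation U -> C is "silent": the protein is unchanged.

# Crude measure of the code's redundancy: each codon carries
# log2(64) = 6 raw bits, but codons only distinguish
# 20 amino acids + stop = 21 outcomes at the protein level.
raw_bits = log2(64)        # 6.0 bits/codon, i.e. 2.0 bits/base
effective_bits = log2(21)  # ~4.39 bits/codon, i.e. ~1.46 bits/base

print(f"redundancy scaling factor ~ {raw_bits / effective_bits:.2f}")  # ~1.37
```

(That ballpark happens to land near the 1.36 figure quoted below, though I can’t say that’s how it was computed.)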
“‘Life spends a lot of time in non-equilibrium states as well, and those are the states in which evolution can operate most quickly.’
Yes, but they must be balanced by states where it operates more slowly. You can certainly have a situation where 1.5 bits are added in odd years and .5 bits in even years, but it’s a wash: you still get 1 bit/year long term.”
This seems to contradict your earlier assertion that the 1 bit/generation rate is “an upper bound, not an average.” It seems to me more analogous to a roulette wheel or the Second Law of Thermodynamics (relax! I’m not about to make a creationist argument just ’cause I said that!): a gene pool can certainly acquire more than 1.36 bits (or whatever the actual figure is) in some generations, but in the long run “the house always wins.”
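(In symbols, the reading I’d be happy with: individual per-generation gains r_t can exceed 1 bit, but what’s bounded is the long-run average (r_1 + … + r_T)/T ≤ 1 bit/generation, exactly as in the odd-year/even-year example above.)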
“The factor due to redundant coding sequences is 1.36 (1.4 bits/base instead of 2.0). This does increase the amount of storable information, because it makes the degenerative pressure (mutation) work less efficiently. Then again, it’s only a factor of 35%, so the conclusion is still basically the same.”
Thank you. As long as everyone’s clear that the speed limit is O(1) bits/generation (over long stretches? on average?) and not necessarily precisely 1 bit no matter what, I’m happy.
Scott: “What you’re saying—correct me if I’m wrong—is that biological evolution never discovered [error-correcting codes]...[O]n top of that, we can’t have the error-correcting code be too good, since otherwise we’ll suppress beneficial mutations!”
Whoa—that’s really helpful. Scott, as usual, you’ve broken through all the troubling and confusing jargon (“equilibrium? durr...”) so that some poor schmuck like me can actually see the main point. Thanks. =)
But of course, evolution itself is a sort of crude error-correcting code—and one that discriminates between beneficial mutations and detrimental ones! So here’s my question: can you actually do asymptotically better than natural selection by applying an error-correcting code that doesn’t hamper beneficial mutations? Or is natural selection (plus the local error-correction already built into DNA) optimal?
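For what it’s worth, here’s a toy sketch of the tension (entirely my own illustration, not anything from the lecture; it assumes a 3-fold repetition code with majority-vote decoding and an independent flip probability p per copy per generation):

```python
def decoded_flip_rate(p: float) -> float:
    """Probability a majority-decoded bit flips: at least 2 of 3 copies must flip."""
    return 3 * p**2 * (1 - p) + p**3

# The decoder has no notion of "beneficial" vs. "detrimental";
# it suppresses every flip equally, cutting the effective
# mutation rate from p down to O(p^2).
for p in (1e-2, 1e-4, 1e-6):
    print(f"per-copy rate {p:.0e} -> decoded rate {decoded_flip_rate(p):.1e}")
```

A stronger code (more copies) drives the decoded rate toward zero, suppressing beneficial flips right along with detrimental ones, which is exactly why I’m asking.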