“‘Life spends a lot of time in non-equilibrium states as well, and those are the states in which evolution can operate most quickly.’
Yes, but they must be balanced by states where it operates more slowly. You can certainly have a situation where 1.5 bits are added in odd years and .5 bits in even years, but it’s a wash: you still get 1 bit/year long term.”
This seems to contradict your earlier assertion that the 1 bit/generation rate is “an upper bound, not an average.” To me it’s more analogous to a roulette wheel or the Second Law of Thermodynamics (relax! I’m not about to make a creationist argument just ’cause I said that!): a gene pool can certainly acquire more than 1.36 bits (or whatever the actual figure is) in some generations, but in the long run “the house always wins.”
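To convince myself, here’s the arithmetic from your own odd/even-year example, spelled out as a trivial sketch (nothing here beyond the numbers in the quote):

```python
# The example from the quote above: 1.5 bits added in odd years,
# 0.5 bits in even years. Some years beat the limit, but the
# long-run average doesn't.
years = 10_000
gains = [1.5 if year % 2 == 1 else 0.5 for year in range(years)]
print(sum(gains) / years)  # -> 1.0 bit/year: "the house always wins"
```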
“The factor due to redundant coding sequences is 1.36 (1.4 bits/base instead of 2.0). This does increase the amount of storable information, because it makes the degenerative pressure (mutation) work less efficiently. Then again, it’s only a factor of 35%, so the conclusion is still basically the same.”
Thank you. As long as everyone’s clear that the speed limit is O(1) bits/generation (over long stretches? on average?) and not necessarily precisely 1 bit no matter what, I’m happy.
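For what it’s worth, here’s a back-of-envelope reconstruction of that ~1.4 bits/base figure (my own reasoning from the degeneracy of the standard genetic code; the original commenter may have computed it differently):

```python
import math

# A codon is 3 bases drawn from 4 letters: log2(4^3) = 6 bits if every
# codon were distinct. But 64 codons encode only 20 amino acids plus
# stop, so the selectable information is at most log2(21) bits/codon.
raw_bits_per_base = math.log2(4)                         # 2.0 bits/base
effective_bits_per_codon = math.log2(21)                 # ~4.39 bits/codon
effective_bits_per_base = effective_bits_per_codon / 3   # ~1.46 bits/base

print(f"raw:       {raw_bits_per_base:.2f} bits/base")
print(f"effective: {effective_bits_per_base:.2f} bits/base")
print(f"factor:    {raw_bits_per_base / effective_bits_per_base:.2f}")
# factor ~1.37, in the same ballpark as the 1.36 quoted above
```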
Scott: “What you’re saying—correct me if I’m wrong—is that biological evolution never discovered [error-correcting codes]...[O]n top of that, we can’t have the error-correcting code be too good, since otherwise we’ll suppress beneficial mutations!”
Whoa—that’s really helpful. Scott, as usual, you’ve broken through all the troubling and confusing jargon (“equilibrium? durr...”) so that some poor schmuck like me can actually see the main point. Thanks. =)
But of course, evolution itself is a sort of crude error-correcting code—and one that discriminates between beneficial mutations and detrimental ones! So here’s my question: Can you actually do asymptotically better than natural selection by applying an error-correcting code that doesn’t hamper beneficial mutations? Or is natural selection (plus local error correction of the kind already present in DNA) optimal?
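To make the question concrete, here’s a crude toy model (entirely hypothetical, not a claim about how real genomes work) where “error correction” is just a knob that scales the raw mutation rate down. Cranking it to zero blocks the detrimental and beneficial mutations alike, so adaptation freezes:

```python
import random

# Toy sketch: individuals are bitstrings, fitness is agreement with a
# fixed target, and "error correction" crudely scales the mutation rate.
# scale = 0 suppresses all mutations (adaptation stalls); scale = 1
# leaves selection as the only filter.

def evolve(correction_scale, genome_len=100, pop=50, gens=200,
           raw_mu=0.02, seed=1):
    rng = random.Random(seed)
    target = [1] * genome_len
    mu = raw_mu * correction_scale          # effective mutation rate
    population = [[0] * genome_len for _ in range(pop)]
    for _ in range(gens):
        # Mutate every bit independently with probability mu.
        for ind in population:
            for i in range(genome_len):
                if rng.random() < mu:
                    ind[i] ^= 1
        # Truncation selection: keep the fitter half, duplicate it.
        population.sort(key=lambda ind: sum(b == t for b, t in zip(ind, target)),
                        reverse=True)
        half = population[:pop // 2]
        population = half + [row[:] for row in half]
    return max(sum(b == t for b, t in zip(ind, target)) for ind in population)

for scale in (0.0, 0.25, 1.0):
    print(f"correction scale {scale}: best fitness {evolve(scale)}/100")
# scale 0.0 stays at 0/100: a "too good" code suppresses the beneficial
# mutations along with the harmful ones.
```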