Wiseman, let M be the number of “functional” base pairs that get mutated per generation, and let C be the number of those base pairs that natural selection can affect per generation. Then if M>>C, the problem is that the base pairs will become mostly (though not entirely) random junk, regardless of what natural selection does. This is a point about random walks that has nothing to do with biology.
To illustrate, suppose we have an n-bit string. At every time step, we can change one of the bits to anything we want, but then two bits get chosen at random and inverted. Question: in “steady state,” how many of the bits can we ensure are 0?
I claim the answer is only 3n/4. For suppose pn of the bits are 1, and that we always pick a ‘1’ bit and change it to 0 (a net change of −1 on its own). Ignoring the negligible chance that the two random flips hit the same bit, those flips are both ‘1’ bits with probability p^2 (net change −3), one ‘1’ and one ‘0’ with probability 2p(1-p) (net change −1), and both ‘0’ bits with probability (1-p)^2 (net change +1). So the expected change in the number of ‘1’ bits after a single time step is

D(p) = p^2 (-3) + 2p(1-p) (-1) + (1-p)^2 (+1) = 1 - 4p.

Setting D(p)=0 and solving yields p=1/4, so at best 3n/4 of the bits stay 0.
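The fixed point is easy to check empirically. Here is a quick simulation sketch of the toy process (the function name and parameters are mine, chosen for illustration): each step zeroes out one ‘1’ bit if any exists, then flips two uniformly random bits.

```python
import random

def simulate(n=10000, steps=200000, seed=0):
    """One step: set some '1' bit to 0 (if any exists), then flip
    two uniformly random bits. Returns the final fraction of '1' bits."""
    rng = random.Random(seed)
    bits = [0] * n          # start from the all-zeros string
    ones = set()            # indices of '1' bits, for O(1) repair
    for _ in range(steps):
        if ones:                        # repair: zero out some '1' bit
            i = ones.pop()
            bits[i] = 0
        for _ in range(2):              # noise: two random bit flips
            j = rng.randrange(n)
            bits[j] ^= 1
            if bits[j]:
                ones.add(j)
            else:
                ones.discard(j)
    return len(ones) / n

print(simulate())  # hovers near 1/4, matching the D(p)=0 fixed point
```

The fraction of ‘1’ bits drifts up from 0 and then fluctuates around 1/4, as the drift equation predicts.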
So we get an either/or behavior: either the mutation rate is small enough that we can keep the functional DNA pretty much exactly as it is, or else a huge fraction of the “functional” DNA becomes randomized. In other words, there’s no intermediate regime where the functional DNA keeps mutating around within some “interesting” region of configuration space, without spilling out into a huge region of configuration space.
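The either/or behavior shows up directly if we generalize the toy model, letting the number of corrections per step stand in for C and the number of random flips for M. The per-step drift becomes -C + M(1-2p), so the predicted steady state is p = (M-C)/(2M) when M > C, which approaches 1/2 (fully random) as M/C grows, and p near 0 when M <= C. A sketch (the function and parameter names are mine, not from the argument above):

```python
import random

def steady_fraction(n, flips, fixes, steps, seed=1):
    """Each step: zero out up to `fixes` of the '1' bits, then flip
    `flips` uniformly random bits. Returns the final fraction of '1' bits."""
    rng = random.Random(seed)
    bits = [0] * n
    ones = set()
    for _ in range(steps):
        for _ in range(fixes):          # selection: repair some '1' bits
            if ones:
                i = ones.pop()
                bits[i] = 0
        for _ in range(flips):          # mutation: random bit flips
            j = rng.randrange(n)
            bits[j] ^= 1
            if bits[j]:
                ones.add(j)
            else:
                ones.discard(j)
    return len(ones) / n

# Drift per step is -fixes + flips*(1-2p), so the predicted steady state
# is p = (flips - fixes)/(2*flips) when flips > fixes, else p near 0.
for m in (1, 2, 4, 16):
    print(m, steady_fraction(2000, m, 1, 60000))
```

With one fix per step, one flip per step keeps the string essentially all zeros, two flips give the p = 1/4 regime from above, and sixteen flips push the string close to fully random.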
Note that for the above argument, I’ve assumed that the “interesting” regions of DNA configuration space are necessarily small—and in particular, that they can’t correspond to Hamming balls of radius cn for some constant c. This assumption is basically a restatement of our earlier observation that natural selection hasn’t discovered error-correcting codes. As such, it seems to me to be pretty secure biologically. But if this assumption fails then so does my argument.
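To give a sense of the scales involved: even the Hamming ball of radius n/4 that the steady state above wanders over contains roughly 2^(0.8n) strings, exponentially many in absolute terms even though it is an exponentially small fraction of all 2^n strings. A sketch using the standard binomial-sum formula for the volume of a Hamming ball (the function name is mine):

```python
from math import comb, log2

def ball_log2_volume(n, radius):
    """log2 of the number of n-bit strings within Hamming distance
    `radius` of any fixed string: sum of C(n, k) for k = 0..radius."""
    return log2(sum(comb(n, k) for k in range(radius + 1)))

n = 200
print(ball_log2_volume(n, n // 4))  # roughly 0.8*n, i.e. a ball of ~2^(0.8n) strings
```

So a walk confined to radius n/4 has already “spilled out” over a region of doubly-astronomical size, which is the sense in which the functional DNA becomes randomized.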