So here’s my question: Can you actually do asymptotically better than natural selection by applying an error-correcting code that doesn’t hamper beneficial mutations?
In principle, yes. In a given generation, all we want is a mutation rate that’s nonzero, but below the rate that natural selection can correct. That way we can maintain a steady state indefinitely (if we’re indeed at a local optimum), but still give beneficial mutations a chance to take over.
Now with DNA, the mutation rate is fixed at ~10^-8 per base pair per generation. Since we need to be able to weed out bad mutations, this imposes an upper bound of ~10^8 on the number of functional base pairs. But there’s nothing special mathematically about the constant 10^-8 -- that (unless I’m mistaken) is just an unwelcome intruder from physics and chemistry. So by using an error-correcting code, could we make the “effective mutation rate” nonzero, but as far below 10^-8 as we wanted?
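To spell out the arithmetic, here’s a back-of-the-envelope sketch in Python; the ~10^-8 figure and the threshold of roughly one uncorrected mutation per genome per generation are the rough numbers from above, not precise biology:

```python
# Back-of-the-envelope: if natural selection can only purge on the order of
# one deleterious mutation per genome per generation, then the number of
# functional base pairs is capped at roughly 1 / (per-base mutation rate).
mutation_rate = 1e-8        # per base pair, per generation (rough figure)
tolerable_mutations = 1.0   # ~1 uncorrected mutation per generation

max_functional_pairs = tolerable_mutations / mutation_rate
print(f"Upper bound on functional base pairs: ~{max_functional_pairs:.0e}")
# -> Upper bound on functional base pairs: ~1e+08
```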
Indeed we could! Here’s my redesigned, biology-beating DNA that achieves this. Suppose we want to simulate a mutation rate ε < 10^-8. Then, in addition to the “functional” base pairs, we also stick in “parity-check pairs” from a good error-correcting code. These parity-check pairs let us correct as many mutations as we want, with only a tiny probability of failure.
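To make the encoding step concrete, here’s a minimal sketch in Python using a tiny Hamming(7,4) code as a stand-in for the “good error-correcting code” (a real construction would use a code with far better rate and error tolerance, and I’m glossing over the translation between bits and base pairs):

```python
import numpy as np

# Hamming(7,4): each block of 4 "functional" bits gets 3 parity-check bits,
# enough to correct any single bit flip within the 7-bit block.
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])  # generator matrix: [ identity | parity checks ]

def encode(data_bits):
    """Append parity-check bits to a block of 4 functional bits."""
    return (np.array(data_bits) @ G) % 2

print(encode([1, 0, 1, 1]))   # -> [1 0 1 1 0 1 0]
```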
Next we let the physics and chemistry of DNA do their work, and corrupt a 10^-8 fraction of the base pairs. And then, using exotic cellular machinery whose existence we get to assume, we read the error syndrome off the parity-check pairs, and use it to undo all but one mutation in the unencoded, functional pairs. But how do we decide which mutation gets left around for evolution’s sake? We just pick it at random! (If we need random bits, we can just extract them from the error syndrome—the cosmic rays, or whatever it is that causes the physical mutations, kindly provide us with a source of entropy.)
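And here’s a toy version of the decoding step, under the same Hamming(7,4) assumption as the sketch above: compute the syndrome of each block, correct every mutated block except one chosen at random, and leave that one in place for natural selection to judge. The `rng` here is an ordinary pseudorandom generator standing in for entropy extracted from the syndromes; none of this pretends to describe actual cellular machinery.

```python
import numpy as np

# Parity-check matrix for the Hamming(7,4) code above: for a single bit flip
# at position i, the syndrome H @ block (mod 2) equals column i of H.
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def syndrome(block):
    """3-bit syndrome; all zeros means no detected error in this block."""
    return (H @ np.asarray(block)) % 2

def correct(block):
    """Flip the single position whose column of H matches the syndrome."""
    block = np.asarray(block).copy()
    s = syndrome(block)
    if s.any():
        pos = next(i for i in range(7) if np.array_equal(H[:, i], s))
        block[pos] ^= 1
    return block

def next_generation(blocks, rng=np.random.default_rng()):
    """Correct every mutated block except one, chosen at random and left
    in place for evolution's sake."""
    mutated = [i for i, b in enumerate(blocks) if syndrome(b).any()]
    keep = rng.choice(mutated) if mutated else None   # the lucky mutation
    return [np.asarray(b) if i == keep else correct(b)
            for i, b in enumerate(blocks)]
```

In this toy version each 7-bit block can only absorb one error; the thought experiment assumes a much better code, whose parity-check pairs soak up the full 10^-8 fraction of physical mutations with only a tiny probability of failure.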