It’s likely possible to engineer away mutations just by checking. ECC memory already has an error rate nine orders of magnitude better than human DNA, and with better error correction you could probably get the error rate low enough that less than one error is expected across all the nanobots that will ever exist. ECC is not the kind of checking whose checking process can be disabled: the memory module always processes raw bits into error-corrected bits, and this fails unless the bits match a checksum, which a mutation can be made astronomically unlikely to satisfy.
You’re very right! I didn’t really think of that. I had the intuition that mutation is very hard to avoid since cancer is very hard to avoid, but maybe that intuition isn’t really accurate.
Thinking a bit more, it does seem unlikely that a mutation can disable the checking process itself, if the checking process is well designed with checksums.
One idea is that the meaning of each byte (or “base pair” in our DNA analogy) changes depending on the checksum of the previous bytes. This way, if one byte is mutated, the meaning of every subsequent byte changes (e.g. “hello” becomes “ifmmp”), rendering the entire string of instructions useless. The checking process itself cannot easily break in a way that compensates for this: it would have to fail to update its checksum for the one mutated byte while still updating it for every other byte, which is very unlikely. If it simply disables checksums, all bytes become illegible (like encryption). I use the word “byte” very abstractly; it could be any unit of information.
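Here’s a rough Python sketch of what I mean (purely illustrative; the particular rolling checksum and byte offsets are arbitrary choices, not a concrete design):

```python
# Toy sketch: each stored byte is offset by a rolling checksum of the bytes
# decoded so far, so one mutated byte scrambles every byte after it, and
# skipping the checksum step scrambles every byte.

def store(plain: bytes) -> bytes:
    out, rolling = [], 0
    for b in plain:
        out.append((b + rolling) % 256)      # a byte's meaning depends on the checksum
        rolling = (rolling * 31 + b) % 256   # arbitrary rolling checksum
    return bytes(out)

def read(stored: bytes) -> bytes:
    out, rolling = [], 0
    for b in stored:
        plain = (b - rolling) % 256          # only legible if the reader tracks
        out.append(plain)                    # the same checksum, byte by byte
        rolling = (rolling * 31 + plain) % 256
    return bytes(out)

tape = store(b"hello")
assert read(tape) == b"hello"                # an intact copy reads back fine

mutated = bytearray(tape)
mutated[1] ^= 1                              # a single-bit "mutation"
print(read(bytes(mutated)))                  # everything from that byte on is garbage
```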
And yes, error-correcting codes could further improve things by allowing a few mutations to be corrected without making the nanobot self-destruct.
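For concreteness, a toy Hamming(7,4) decoder shows the principle (this is just the textbook code, nothing nanobot-specific): it locates and corrects any single flipped bit in a 7-bit block, the same single-error-correcting idea ECC memory uses in hardware.

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits; any single bit flip
# in the 7-bit codeword can be located and corrected.

def encode(d):                      # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

def decode(c):                      # c: list of 7 possibly-corrupted bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    err = s1 + 2 * s2 + 4 * s3      # 0 = clean, otherwise 1-based error position
    if err:
        c = list(c)
        c[err - 1] ^= 1             # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = encode(data)
word[5] ^= 1                        # simulate a "mutation": flip one bit
assert decode(word) == data         # the flip is corrected transparently
```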
It’s still possible that the hierarchical idea in my post has advantages over checksums. In theory it only slows down self-replication when a nanobot first retrieves its instructions, not every time a nanobot uses its instructions.
Maybe a compromise is to have only one level of master nanobots, which are allowed to replicate the master copy provided that they use checksums. They still use these master copies to install simple copies in other nanobots, which do not need checksums.
I admit, maybe a slight difference in self-replication efficiency doesn’t matter. Exponential growth might be so fast that over-engineering the self-replication speed is a waste of time. Choosing a simpler system that can be engineered and set up sooner might be wiser.
I agree that the hierarchical idea (and any master copy idea) might end up being overkill. I don’t see it as a very big idea myself.