Actually, these mathematician’s bits are very close to bits on a hard drive. Genomes, so far as I know, have no ability to determine what the next base ought logically to be; there is no logical processing in a ribosome. Selection pressure has to support each physical DNA base against the degenerative pressure of copying errors. Unless changing the DNA base has no effect on the organism’s fitness (a neutral mutation), the “one mutation, one death” rule comes into play.
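To make the bookkeeping concrete, here is a minimal toy simulation in Python of what it means for selection to “support” each functional base (the genome length, functional fraction, and error rate below are illustrative numbers I picked, not biological estimates): every base whose identity matters adds its own chance of a lethal copying error, and the surviving fraction of offspring shrinks accordingly.

```python
import random

# A minimal sketch of "one mutation, one death" (toy model; the lengths and
# error rate are illustrative, not biological estimates).
# Bases 0..FUNCTIONAL-1 are under selection: any copying error there is lethal.
# The rest are neutral. Each functional base therefore contributes its own
# chance of producing a dead offspring that selection has to pay for.

BASES = "ACGT"
GENOME_LEN = 5_000      # total bases in the toy genome
FUNCTIONAL = 1_000      # bases whose identity matters to fitness
MUT_RATE = 1e-3         # per-base copying error rate
N_OFFSPRING = 2_000

def copy_with_errors(genome):
    """Copy a genome, replacing each base with a random one at rate MUT_RATE."""
    return [random.choice(BASES) if random.random() < MUT_RATE else b
            for b in genome]

def survives(offspring, reference):
    """Lethal selection: any change at a functional position kills the copy."""
    return all(offspring[i] == reference[i] for i in range(FUNCTIONAL))

reference = [random.choice(BASES) for _ in range(GENOME_LEN)]
copies = [copy_with_errors(reference) for _ in range(N_OFFSPRING)]
survival = sum(survives(c, reference) for c in copies) / N_OFFSPRING

# Expectation: (1 - 0.75 * MUT_RATE) ** FUNCTIONAL, about 0.47 here.
# (The 0.75 is because a "mutation" to the same base changes nothing.)
print(f"fraction of offspring with no lethal copying error: {survival:.2f}")
```

Double the number of functional bases, or the error rate, and the surviving fraction drops again; that is the per-base cost that neutral mutations escape and every meaningful base has to pay.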
Now certainly, once the brain is constructed and patterned, there are billions of neurons, all of them playing a functional role; and once those neurons are exposed to the environment, the algorithmic complexity will actually begin to increase. But the core learning algorithms must still, in principle, be specifiable in 25 megabytes. There may not be junk neurons, but there is surely junk DNA.
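For scale, 25 megabytes is the right order of magnitude given ordinary genome arithmetic. A back-of-the-envelope sketch (the functional fraction here is an illustrative assumption on my part, not a figure from the discussion):

```python
# Back-of-the-envelope arithmetic for the scale of the claim (a sketch; the
# functional fraction is an illustrative assumption, not a figure from the text).

base_pairs = 3e9             # approximate length of the human genome
bits_per_base = 2            # four possible bases -> 2 bits each
raw_megabytes = base_pairs * bits_per_base / 8 / 1e6

functional_fraction = 0.03   # assumed share of bases actually under selection
functional_megabytes = raw_megabytes * functional_fraction

print(f"raw genome:          ~{raw_megabytes:.0f} MB")        # ~750 MB
print(f"selection-supported: ~{functional_megabytes:.0f} MB")  # ~22 MB
```

The raw genome is on the order of 750 megabytes; only the fraction that selection actually supports, base by base, counts toward the budget, and a few percent of 750 megabytes lands in the tens of megabytes.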
Now, even junk DNA may help in a certain sense: its metabolic load is tiny, and the more junk DNA you have, the more crossover you can do with a smaller probability of swapping in the middle of a coding gene. This “function” of junk DNA does not depend on its information content, so it doesn’t have to be supported against the degenerative pressure of a per-base probability of copying error.
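The crossover point is easy to see in a toy model (a sketch with made-up gene counts and lengths, not anything from the discussion): with no junk between genes, every crossover point falls inside a gene; pad the genes with enough junk and most crossovers land harmlessly in the spacers.

```python
import random

# Toy model of the crossover argument (a sketch, not the post's own math).
# n_genes coding genes, each gene_len bases long, each followed by junk_between
# bases of junk. A single crossover point is drawn uniformly over the
# chromosome; we estimate how often it lands inside a coding gene.

def p_crossover_hits_gene(n_genes=100, gene_len=1_000, junk_between=0,
                          trials=200_000):
    block = gene_len + junk_between          # one gene plus its junk spacer
    total = n_genes * block
    hits = 0
    for _ in range(trials):
        point = random.uniform(0, total)
        if point % block < gene_len:         # falls in the coding part of its block
            hits += 1
    return hits / trials

for junk in (0, 1_000, 10_000):
    p = p_crossover_hits_gene(junk_between=junk)
    print(f"junk between genes: {junk:6d}  P(crossover inside a gene) ~ {p:.3f}")

# With no junk, every crossover lands in a gene; with 10x junk per gene, only
# about 9% do, matching the analytic value gene_len / (gene_len + junk_between).
```

And since it makes no difference *which* bases fill the spacers, this benefit costs nothing in selection pressure, which is the point of the paragraph above.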
To sum up: The mathematician’s bits here are very close to bits on a hard drive, because every DNA base that matters has to be supported by “one mutation, one death” to overcome per-base copying errors.