I’m not saying we’ll take the genome and read it to figure out how the brain does what it does. I’m saying that we run a brain simulation and do science (experiments) on it and study how it works, much as we study how DNA transcription or ATP production or muscle contraction or a neuron’s ion pumps or the Krebs cycle or honeybee communication or hormone release or cell division or the immune system or chick begging or the heart’s pacemaker work. There are plenty of things evolution hasn’t obfuscated so badly that we’ve been unable to figure out what they’re doing. Of course there are also a lot of things we don’t understand yet, but I don’t see how that leads to the conclusion that evolution is generally obfuscatory.
I guess it tends to create physical structures that are simple, but I think the computational stuff tends to be weird. If you have a strand of DNA, the only way to tell what kind of chemistry it will result in is to run it. From what little I’ve heard, it sounds like any sort of program made by a genetic algorithm that can actually run is too crazy to understand. For example, I’ve heard of a set of transistors hooked together to be able to tell “yes” and “no” apart, or something like that. Some of the transistors seemed to be just draining energy, yet were vital. Running it on another set of transistors wouldn’t work; it required the exact specs of those particular transistors. That said, the sort of sources I hear this from are also the kind that say ridiculous things about quantum physics, so I’ll need an expert to tell me whether it’s true.

Has anyone here studied evolved computers?

The story you are referring to is On the Origin of Circuits:
The plucky chip was utilizing only thirty-seven of its one hundred logic gates, and most of them were arranged in a curious collection of feedback loops. Five individual logic cells were functionally disconnected from the rest—with no pathways that would allow them to influence the output—yet when the researcher disabled any one of them the chip lost its ability to discriminate the tones. Furthermore, the final program did not work reliably when it was loaded onto other FPGAs of the same type.
This has been repeated many times in different domains where machines are used to design something. The output is usually really hard to understand, whether it’s code, mathematical formulas, neural network weights, or transistors. Of course, reverse engineering code in general is difficult, so this may not be a problem specific to GAs.
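To make that concrete, here’s a minimal sketch of the kind of loop these systems run, in plain Python; the genome length, population size, and fitness function are all toy stand-ins for “program the chip and measure its output”, not anything from the actual experiment. Nothing in the loop rewards legibility or portability, so the winning genome is just whatever bit pattern happens to score well on this particular evaluator.

```python
import random

GENOME_LEN = 100       # toy stand-in for a circuit's configuration bits
POP_SIZE = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # Stand-in for "load the bits onto the FPGA and measure how well it
    # discriminates the tones"; here it's just a fixed target pattern.
    target = [i % 2 for i in range(GENOME_LEN)]
    return sum(1 for g, t in zip(genome, target) if g == t)

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve(generations=200):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]   # truncation selection
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))   # high score, but the genome itself is opaque bits
```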
Furthermore, the final program did not work reliably when it was loaded onto other FPGAs of the same type.
This makes an interesting contrast with biological evolution. The “programs” it comes up with do run quite reliably when loaded onto other organisms of the same type. In fact, parts of slightly different programs from different individuals can be jumbled together at random and the result still works! Often you can take a component from one organism, insert it into a very distantly related one, and it still works! On top of that, organisms are very clearly made of parts with specialised, understandable purposes, unlike what you typically see when you look inside a trained neural network.
How does this happen? Can this level of robustness and understandability be produced in artificially evolved systems?
Well, the FPGA is the closer analogy to the environment, not to the organism, and the evolved circuit was heavily optimized for that specific environment. It would be like taking a species of fish that only ever lived in one particular lake and putting it into a different lake with a slightly higher pH: they wouldn’t survive as well either.
But I don’t disagree with your general point; evolution is surprisingly robust. Geoffrey Hinton has a very interesting theory about this here: sexual reproduction forces genes to be randomly recombined each generation, which prevents complicated co-dependencies between multiple genes from forming.
He applies a similar principle to neural networks and shows it vastly improves their performance (the method, dropout, is now widely used to regularize NNs). Presumably it also makes them far more understandable, as you mention, since each neuron is forced to provide useful outputs on its own, without being able to depend on specific other neurons.
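A rough sketch of the parallel, with made-up shapes and nothing taken from Hinton’s actual formulation: uniform crossover builds a child genome by picking each gene from a random parent, and dropout randomly silences hidden units on each training step. In both cases a component only survives if it’s useful regardless of which other components it happens to be paired with.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_crossover(parent_a, parent_b):
    # Each gene comes from a randomly chosen parent, so no gene can rely
    # on always co-occurring with a specific partner gene.
    mask = rng.random(parent_a.shape) < 0.5
    return np.where(mask, parent_a, parent_b)

def dropout_hidden_layer(x, weights, drop_prob=0.5, training=True):
    # ReLU hidden layer with (inverted) dropout: each unit is randomly
    # silenced during training, so no unit can depend on specific others.
    h = np.maximum(0.0, x @ weights)
    if training:
        keep = (rng.random(h.shape) >= drop_prob) / (1.0 - drop_prob)
        h = h * keep
    return h

# Toy usage with invented sizes
genome_a = rng.integers(0, 2, size=10)
genome_b = rng.integers(0, 2, size=10)
child = uniform_crossover(genome_a, genome_b)

x = rng.normal(size=(4, 8))      # batch of 4 inputs, 8 features
W = rng.normal(size=(8, 16))     # 16 hidden units
h = dropout_hidden_layer(x, W)
```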