There’s nothing wrong with “empirical” research on computer programs, especially with complex systems. If you can build something that is at least closer to what you want than your last attempt, you can study its behavior and analyze the results, looking for patterns or failures, in order to design a better version.
I know Eliezer hates the word “emergent”, but the emergent properties of complex systems are very difficult to theorize about without observation or simulation, and with computer programs there’s precious little difference between those and just running the damn program. Could you design a glider gun after reading the rules of Conway’s game of life, without ever having run it?
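To make the Life example concrete, here is a minimal sketch of the game’s rules in Python (the pattern and coordinates are just the standard glider, not anything from the original comment). The point is that a glider’s period-4 diagonal drift is trivial to *observe* by running the rules, even though nothing in the rules’ text announces it:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life.
    `live` is a set of (x, y) coordinates of living cells."""
    # Count how many live neighbours each cell has.
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {
        cell
        for cell, n in neighbours.items()
        if n == 3 or (n == 2 and cell in live)
    }

# The classic glider. After 4 generations it reappears shifted
# one cell down and one cell right -- a fact you discover by
# watching it, not by staring at the rules above.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})
```

Nothing here is a theory of gliders; it is just the “run the damn program” step, which is exactly the kind of cheap observation the comment is defending.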
It’s no way to write a safely self-modifying AI, to be sure, but it might be a valid research tool with which to gain insight into the overall problem of AI.