“It seems unlikely that you could have a genetic algorithm operate on a population of code and end up with a program that passes the Turing test”
Well, we have one case of it working, and that wasn’t even with the process being designed with “pass the Turing test” specifically as a goal.
“because at each step the genetic algorithm (as an optimization procedure) needs to have some sense of what is more or less likely to pass the test.”
Having an automated process for determining with certainty that something passes the Turing test is much stronger than merely having nonzero information. Suppose I’m trying to use a genetic algorithm to create a Halting Tester, and I have a candidate Halting Tester that says a given program doesn’t halt. If I observe that the program has not, in fact, halted within n steps (by simply running the program for n steps), that provides nonzero information about the efficacy of my Halting Tester. This suggests that I could create a genetic algorithm for creating Halting Testers (obviously, I couldn’t evolve a perfect Halting Tester, but perhaps I could evolve one that is “good enough”, given some standard). And who knows: maybe if I had such a genetic algorithm, not only would my Halting Testers evolve better Halting Testing, but since they are competing against each other, they would also evolve better Tricking of Other Halting Testers, and maybe that would eventually spawn AGI. I don’t find that inconceivable.
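The bounded-run fitness signal described above can be sketched as a toy genetic algorithm. Everything here (the program representation, the threshold heuristic, the parameters) is invented for illustration; real candidate testers would be evolved code, not a single number.

```python
import random

# Toy "programs": each halts when its counter reaches 0; decrement == 0
# gives a program that never halts.
def make_program(decrement):
    return {"counter": 20, "decrement": decrement}

def runs_within(prog, n):
    """The weak ground-truth signal: did the program halt within n steps?"""
    c = prog["counter"]
    for _ in range(n):
        if c <= 0:
            return True
        c -= prog["decrement"]
    return c <= 0

# A candidate "Halting Tester" is just a threshold heuristic here:
# predict "halts" when the program's decrement exceeds the threshold.
def predict(threshold, prog):
    return prog["decrement"] > threshold

def fitness(threshold, programs, n=100):
    """Score a tester by agreement with the bounded-run observation."""
    return sum(predict(threshold, p) == runs_within(p, n) for p in programs)

def evolve(programs, generations=30, pop_size=20, seed=0):
    """Mutation-and-selection loop driven only by the imperfect signal."""
    rng = random.Random(seed)
    pop = [rng.uniform(-2, 2) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, programs), reverse=True)
        survivors = pop[: pop_size // 2]
        # Keep the best half; refill with mutated copies of the survivors.
        pop = survivors + [t + rng.gauss(0, 0.3) for t in survivors]
    return max(pop, key=lambda t: fitness(t, programs))
```

The point of the sketch is that the fitness function never decides halting with certainty; it only checks agreement with what n steps of execution reveal, and that weak signal is enough for selection to act on.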
Well, we have one case of it working, and that wasn’t even with the process being designed with “pass the Turing test” specifically as a goal.
Are you referring to the biological evolution of humans, or stuff like this?
Having an automated process for determining with certainty that something passes the Turing test is much stronger than merely having nonzero information.
Right; how did you interpret “some sense of what is more or less likely to pass the test”?
I was referring to the biological evolution of humans; in your link, the process appears to have been designed with the Turing test in mind.
There’s probably going to be a lot of guesswork as far as which metrics for “more likely to pass” are best, but the process doesn’t have to be perfect, just good enough to generate intelligence. Obvious places to start would be complex games such as Go and poker, and replicating aspects of human evolution, such as simulating hunting and social maneuvering.
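One way such an imperfect, competitive metric can work is relative fitness, where candidates are scored only against each other rather than against a fixed standard. A toy sketch (the game, parameters, and selection rule are all made up for illustration): evolving players of the “guess 2/3 of the average” game, where each generation’s fitness depends on what the rest of the population is doing.

```python
import random

def evolve_guessers(generations=200, pop_size=30, seed=1):
    """Evolve players of the '2/3 of the average' guessing game.

    Each candidate is just a number in [0, 100]. Each generation, the
    players closest to 2/3 of the population's mean guess survive and
    reproduce with mutation. The fitness signal is purely relative:
    candidates compete against each other, not against a fixed target.
    """
    rng = random.Random(seed)
    pop = [rng.uniform(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        target = (2 / 3) * (sum(pop) / len(pop))
        pop.sort(key=lambda g: abs(g - target))  # best guessers first
        survivors = pop[: pop_size // 2]
        # Refill with mutated survivors, clipped to stay non-negative.
        pop = survivors + [max(0.0, s + rng.gauss(0, 1.0)) for s in survivors]
    return pop
```

Because the target moves with the population, selection ratchets the guesses downward toward the game’s equilibrium without anyone specifying “low guesses” as the goal, which is the sense in which a merely good-enough competitive metric can still drive optimization.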
I was referring to the biological evolution of humans
Ok. When I said “you,” I meant modern humans operating on modern programming languages. I also don’t think it’s quite correct to equate actual historical evolution and genetic algorithms, for somewhat subtle technical reasons.