Can we, as rational individuals, get over the Turing Test? The first computer program to pass it was “ELIZA”, written in 1966, although that test was conducted informally and its result is contested by some. The Loebner Prize is awarded every year to the computer that best “passes” the poorly-defined test, and people’s increasing familiarity with computers and their limitations would make the test harder to pass every year even if everything else remained constant (it hasn’t: the topics allowed have expanded, and the length of the conversations has increased, since the contest started). Nonetheless, human beings have reliably been fooled about whether they were conversing with a computer or a human being since the late ’60s.
It doesn’t test what it purports to test; at best it tests the humans conducting it, who often fail even to correctly identify the human beings on the other end of the console. It is also a -terrible- test for intelligence in an AI, since it tests the AI’s ability to lie about being human rather than its ability to think. (Quick: what’s 2^11 / 5^5, rounded to the nearest thousandth? The computers in the room have just been revealed, not by their inability to work at a human’s level, but by humans’ inability to work at a computer’s level.)
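(For the record, a one-line Python sketch of the quoted arithmetic, just to make the point concrete; a machine produces this instantly, while most humans cannot without pencil and paper:)

```python
# 2^11 = 2048, 5^5 = 3125; 2048 / 3125 = 0.65536, which rounds to 0.655
print(round(2**11 / 5**5, 3))  # -> 0.655
```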