Turing Test and Machine Intelligence
The recent news of Eugene Goostman passing the Turing Test has raised all kinds of debate about what counts as passing the test and whether the chatbots attempting it are doing so properly.
The Turing Test is not a good criterion for machine intelligence.
Turing’s prediction that a computer could trick humans through imitation in conversation is a remarkable one. I think it’s important to discern what fulfilling it would actually mean.
For a machine to imitate human language and reasoning successfully, it would essentially need intelligence well above the average human’s. A general intelligence not specifically designed to be a fake human would need to model human behavior and derive from that model communication that conceals the AI’s “true nature”.
Computers’ supremacy over humans in chess has been a common motif in AI discussions since Kasparov lost to Deep Blue in the ’90s. Yet no one claims that these chess engines learned to play chess the way humans do or that they rely on the same logic we use. I’m not an expert on programming, AI, or chess, but it still seems obvious that it would be improper to take computers’ current superiority at chess as solid proof of high general intelligence, the kind that could imitate humans and play chess as humans do.
Goal structure for deception vs. a crafted set of tricks and “repeat after me”
For an AI to truly participate in the Turing Test, it would need to be self-aware. In addition to self-awareness, it would require a goal structure, one that includes an incentive to deceive humans into thinking the AI is human too. More specifically, cognitively pretending not to be yourself requires self-awareness. This would be very sophisticated and subtle. It’s hard for many humans to pretend to be someone else, though some excel at it, despite our built-in capacity for empathy and our nearly identical brains. To do the same with an internal “mental” structure that might be nothing like ours would, in my opinion, require either an intelligence above the average human’s or a designed set of tricks.
Are the “Artificial Intelligences” that attempt to pass the Turing Test intelligent at all? To me it seems that the chatbots are essentially one-trick ponies that merely “repeat after me”. Somebody carefully designs an automated way of picking words that tricks the average Joe by avoiding any conversation or interaction of substance. Computers’ vast capacity for storage and recall makes them good at memorizing a lot of tricks.
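The kind of “repeat after me” trickery described above can be sketched as an ELIZA-style pattern matcher. The rules and phrasings below are my own illustrative assumptions, not taken from any actual contest entry; the point is that the program deflects substance back at the user with no understanding at all:

```python
import random
import re

# Canned keyword patterns and reply templates -- the whole "intelligence"
# of the bot lives in this hand-crafted list of tricks.
RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bI am (.+)", ["Why are you {0}?", "Do you enjoy being {0}?"]),
    (r"\byou\b", ["We were discussing you, not me."]),
    (r"\?$", ["Why do you ask?", "What do you think?"]),
]
# Fallbacks that keep the conversation going while saying nothing.
DEFAULT = ["Please, go on.", "Tell me more.", "I see."]

def respond(utterance: str) -> str:
    """Pick a reply by keyword matching -- no model of meaning involved."""
    for pattern, replies in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            # Echo the user's own words back inside a template.
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULT)

print(respond("I feel confused"))  # e.g. "Why do you feel confused?"
```

A few dozen such rules, scaled up with a computer’s vast storage, are enough to carry a shallow conversation, which is exactly why this kind of program can fool a casual judge without possessing anything resembling intelligence.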
What is actually being measured in the Turing Test is not intelligence. It is an attempt to find an automated means of tricking a human into thinking they’re talking to someone else, which does not require an intelligent agent. It is similar to having a really convincing answering machine on your telephone.