Nearly everyone’s heard of the Turing test. So the first machines to pass it will be dedicated systems, specifically designed to get through the test.
The problem with this line of reasoning is that the Turing test is very open-ended. You have no idea what a bunch of humans will want to talk to your machine about. Maybe about God, maybe about love, maybe about remembering your first big bloody scrape as a kid… Maybe your machine will get some moral puzzles, maybe logical paradoxes, maybe some nonsense.
And once a machine is truly able to sustain a long conversation on any topic, well, at that point we’re back to the interesting question of what “intelligent” means.
This was more of a challenge before the web, with its trillions of lines of text on every subject. Because of that, I no longer consider the text-based test very good—a truly open-ended test would need to deviate from the text-based format nowadays.
But you can keep adding specifics to a subject until you arrive at something novel. I don’t think it would even be that hard: just Google the key phrases of whatever you’re about to say, and if you get back results that could be smooshed into a coherent answer, then you need to keep changing things up or complicating them.
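That lookup heuristic can be sketched as a short loop. This is a toy sketch: `web_search`, its canned corpus, and the trigram key-phrase extractor are all hypothetical stand-ins for a real search API and real phrase extraction.

```python
def web_search(phrase):
    # Hypothetical search backend, stubbed with a tiny canned corpus
    # so the sketch runs on its own; returns snippets containing the phrase.
    corpus = [
        "the turing test measures whether a machine can imitate a human",
        "my first big scrape as a kid taught me about pain",
    ]
    return [doc for doc in corpus if phrase in doc]

def key_phrases(question, n=3):
    # Crude key-phrase extraction: consecutive word trigrams.
    words = question.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def is_probably_novel(question):
    # If most key phrases come back with hits, a lookup-based bot could
    # plausibly smoosh the results into a coherent answer, so the
    # question is not novel enough.
    phrases = key_phrases(question)
    if not phrases:
        return True
    hits = sum(1 for p in phrases if web_search(p))
    return hits / len(phrases) < 0.5

def make_novel(question, complications):
    # Keep appending specifics until the question stops matching the web.
    for extra in complications:
        if is_probably_novel(question):
            return question
        question = question + " " + extra
    return question
```

The 0.5 hit-rate threshold is arbitrary; the point is only that the interrogator keeps complicating the question until lookup stops helping.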
Where does this leave mute humans, partially paralyzed humans, or any other kind of human who can’t verbally speak your language? If we still classify them as “human”, then what reason do you have for rejecting the AI?
The Turing test retains validity as a general test for all systems that are not specifically optimised to pass it.
For instance, the Turing test is good for checking whether whole brain emulations are conscious. Conversation is enough to check that humans are conscious (and if a dog or dolphin managed conversation, it would work as a test for them as well).
This is a circular argument, IMO. How can you tell whether you’re talking to a whole brain emulation, or a bot designed to mimic a whole brain emulation?
By knowing its provenance. Maybe, when we get more sophisticated and knowledgeable about these things, by looking at its code.
When assessing whether humans are lying, knowing the details of their pasts (especially, for instance, whether they were trained to lie professionally) should likewise affect your assessment of their performance.
That’s why the test only offers a sufficient condition for intelligence, not a necessary one; at least, that’s the standard view.