I’ve recently developed the suspicion that the Turing test (comparing AI with a standard human) could get passed by a narrow AI finely tuned to that task.
This implies that there must be some way to distinguish a human mind from the AI, besides the Turing Test. That is, there must be some hidden property that fulfills the following criteria:
Human minds possess it; narrow-focus AIs do not.
The property produces observable effects, and its existence could be inferred from observing these effects with a high degree of sensitivity and specificity (a sketch of what that would mean in practice follows this list).
The Turing Test does not already take these observable effects into account.
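To pin down what I mean by the second criterion, here's a minimal sketch (in Python; every name in it is made up purely for illustration, and it is not an actual proposed test) of how one would score a hypothetical detector of such a property against subjects whose ground truth is known:

```python
# Sketch only: `detector` and `subjects` are hypothetical stand-ins.
# `subjects` is a list of (observable_effects, is_human) pairs, and
# `detector` maps observable effects to a True/False "human" verdict.

def evaluate_detector(detector, subjects):
    """Return (sensitivity, specificity) of a human-detector.

    Assumes `subjects` contains at least one human and one non-human.
    """
    tp = fn = tn = fp = 0
    for effects, is_human in subjects:
        verdict = detector(effects)
        if is_human:
            if verdict:
                tp += 1  # human correctly identified
            else:
                fn += 1  # human rejected: a false negative
        else:
            if verdict:
                fp += 1  # AI mistaken for a human
            else:
                tn += 1  # AI correctly rejected

    sensitivity = tp / (tp + fn)  # P(verdict "human" | actually human)
    specificity = tn / (tn + fp)  # P(verdict "not human" | actually AI)
    return sensitivity, specificity
```

A test only counts, on my view, if both numbers come out high.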
So, (a) what is this property whose existence you are proposing, and (b) how would we test for its presence and absence?
The most popular answers are “consciousness” and “I don’t know”, but I find these unsatisfactory. Firstly, no one seems to have a definition of “consciousness” that isn’t circular (i.e., “you know, it’s that thing that humans have but AIs don’t”) or a priori unfalsifiable (“it’s your immortal soul!”). Secondly, if you can’t test for the presence or absence of a thing, then you might as well ignore it, since, as far as you know, it doesn’t actually do anything.
The slightly less popular answers to (b) are all along the lines of, “let’s make the agent perform some specific creative task that some humans are good at, such as composing a poem, painting a picture, dancing a tango, etc.” Unfortunately, such tests would produce too many false negatives. I personally cannot do any of the things I listed above, and yet I’m pretty sure I’m human. Or am I? How would you know?
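To see how badly the creative-task proposal fails the sensitivity requirement, plug a toy population into the sketch above (the numbers are invented purely to show the arithmetic):

```python
# Toy data: 10 humans, of whom only 1 can compose a poem, and
# 10 narrow AIs, none of which can.
subjects = ([({"writes_poetry": False}, True)] * 9
            + [({"writes_poetry": True}, True)]
            + [({"writes_poetry": False}, False)] * 10)

# The proposed test: "human iff they can compose a poem".
poetry_test = lambda effects: effects["writes_poetry"]

print(evaluate_detector(poetry_test, subjects))
# -> (0.1, 1.0): perfect specificity, but 90% of the humans
#    get classified as non-human.
```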
> This implies that there must be some way to distinguish a human mind from the AI, besides the Turing Test.
Maybe the AI lacks the ability to learn any skills in a non-linguistic way: it could never recognise videos, only linguistic descriptions of them. Maybe it’s incapable of managing actual humans (though it can spout some half-decent management theory if pressed linguistically).
I’d say a general AI should be tested using some test that it wasn’t optimised for/trained on.
> Maybe the AI lacks the ability to learn any skills in a non-linguistic way: it could never recognise videos, only linguistic descriptions of them. Maybe it’s incapable of managing actual humans (though it can spout some half-decent management theory if pressed linguistically).
Once again, these tests provide too many false negatives. Blind people cannot recognize videos, either (though, oddly enough, existing computer vision systems can); even sighted people can have trouble telling what’s going on, if the video is in a foreign language and depicts a foreign culture. And few people are capable of managing humans; I know I personally can’t do it, for example.
> I’d say a general AI should be tested using some test that it wasn’t optimised for/trained on.
How would you know, ahead of time, what functions the AI was optimized to handle, or even whether you were talking to an AI in the first place? If you knew the answer to that, you wouldn’t need any tests, Turing or otherwise; you’d already have the answer.
In general, it sounds to me like your test is over-fitting. It would basically force you to treat anyone as non-human, unless you could see them in person and verify that they were made of meat just like you and me. Well, actually, not me. You only have my word for it that I’m human, and you’ve never seen me watching a cat video, so I could very well be an AI.
> In general, it sounds to me like your test is over-fitting. It would basically force you to treat anyone as non-human
It holds AIs to a higher standard, yes. But one of the points of the Turing test was not that any intelligent computer could pass it, but that any computer that passed it was intelligent.