This implies that there must be some way to distinguish a human mind from the AI, besides the Turing Test.
Maybe the AI lacks the ability to learn any skills in a non-linguistic way; it could never recognise videos, only linguistic descriptions of them. Maybe it’s incapable of managing actual humans (though it can spout some half-decent management theory if pressed linguistically).
I’d say a general AI should be tested using some test that it wasn’t optimised for/trained on.
> Maybe the AI lacks the ability to learn any skills in a non-linguistic way; it could never recognise videos, only linguistic descriptions of them. Maybe it’s incapable of managing actual humans (though it can spout some half-decent management theory if pressed linguistically).
Once again, these tests provide too many false negatives. Blind people cannot recognize videos, either (though, oddly enough, existing computer vision systems can); even sighted people can have trouble telling what’s going on, if the video is in a foreign language and depicts a foreign culture. And few people are capable of managing humans; I know I personally can’t do it, for example.
> I’d say a general AI should be tested using some test that it wasn’t optimised for/trained on.
How would you know, ahead of time, what functions the AI was optimized to handle, or even whether you were talking to an AI in the first place? If you knew the answer to that, you wouldn’t need any tests, Turing or otherwise; you’d already have the answer.
In general, it sounds to me like your test is over-fitting. It would basically force you to treat anyone as non-human, unless you could see them in person and verify that they were made of meat just like you and me. Well, actually, not me. You only have my word for it that I’m human, and you’ve never seen me watching a cat video, so I could very well be an AI.
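To make the over-fitting analogy concrete, here’s a toy sketch of the kind of test you’re proposing (every field name here is made up purely for illustration). Each extra requirement screens out a few more impostors, and a lot more genuine humans; that’s the false-negative problem in miniature:

```python
# Toy model of an "over-fitted" humanity test; all field names are
# hypothetical, invented just for this illustration.

def strict_humanity_test(subject: dict) -> bool:
    """Classify a subject as human only if it clears every hurdle."""
    return (
        subject.get("converses_fluently", False)         # the classic Turing criterion
        and subject.get("recognises_videos", False)      # fails blind humans
        and subject.get("manages_people", False)         # fails most humans
        and subject.get("verified_made_of_meat", False)  # fails anyone you only chat with
    )

# A perfectly genuine human who happens to be blind and has never
# managed anyone gets classified as non-human: a false negative.
blind_human = {
    "converses_fluently": True,
    "recognises_videos": False,
    "manages_people": False,
    "verified_made_of_meat": True,
}
print(strict_humanity_test(blind_human))  # False
```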
> In general, it sounds to me like your test is over-fitting. It would basically force you to treat anyone as non-human
It holds AIs to a higher standard, yes. But one of the points of the Turing test was not that any intelligent computer could pass it, but that any computer that passed it was intelligent.
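In symbols, since the direction of the implication is the whole point (this formalisation is my own gloss, not anything from Turing’s paper):

$$\text{Passes}(x) \Rightarrow \text{Intelligent}(x), \qquad \text{Intelligent}(x) \not\Rightarrow \text{Passes}(x)$$

So a false negative (an intelligent agent that fails) doesn’t break the test; a false positive would.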