does the ability to pass for a human correlate with [qualities] which we vaguely describe as “intelligent” or “conscious”[?]
I always thought (and was quite convinced of this, though I can’t seem to recall why now) that the Turing test was explicitly designed as a “sufficient” rather than a “necessary” kind of test. As in, you don’t need to pass it to be “human-level”, but if you do, then you certainly are. (Or, more precisely: as long as we’ve established we can’t tell, who cares? With a similar sentiment for exactly what it is we’re comparing when we say “human-level”: it’s something about how we’re smarter than monkeys, we’re not sure quite what, but if we can’t tell the difference, you’re in.) A brute-force, first-try, upper-bound sort of test.
But I get the feeling from some of the comments that it claims more than that (or maybe doesn’t disclaim as much). Am I missing some literature or something?
I personally agree with your comment (assuming I understand it correctly). As far as I can tell, however, some people believe that merely being able to converse with humans on their level is not sufficient to establish an agent’s ability to think at a human level. I think this belief is misguided, since it privileges implementation details over function, but I could always be wrong.
IIRC, Turing introduces the concept in the paper as a sufficient but not necessary condition, as you describe here.
I feel it may be neither necessary nor sufficient. It would be a pretty strong indication, but wouldn’t be enough on its own.