I’m not entirely sure what you mean by “Turing Test”. As far as I understand, the test is not multiple choice; instead, you just converse with the test subject as best you can, then make your judgement. And, since the only judgements you can make are “human” and “non-human”, the test doesn’t tell you how well the test subject can solve urban navigation problems or whatever; all it tells you is how good the subject is at being human.
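For concreteness, here is a toy sketch of the protocol as I understand it. Everything in it — the question list, `subject_reply`, `judge_verdict` — is a hypothetical stand-in, just to make the shape of the test explicit:

```python
# Toy sketch of the test protocol: converse freely, then render one binary
# verdict. All names here are hypothetical stand-ins, not a real framework.
def turing_test(questions, subject_reply, judge_verdict):
    transcript = [(q, subject_reply(q)) for q in questions]
    # The only possible outputs are "human" and "non-human"; nothing in the
    # protocol measures how well the subject solves navigation problems.
    return "human" if judge_verdict(transcript) else "non-human"

# Example run with deliberately trivial stand-ins:
verdict = turing_test(
    questions=["How was your day?", "What is 2 + 2?"],
    subject_reply=lambda q: "Honestly, I'd rather talk about the weather.",
    judge_verdict=lambda transcript: all(answer for _, answer in transcript),
)
print(verdict)  # "human" -- this toy judge is far too charitable
```

The whole test lives inside `judge_verdict`: a single binary call, made after nothing but conversation.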
The trick, though, is that in order to converse on a human level, the test subject would have to implement at least some form of AGI, because this is what humans do. This does not mean that the AI would be able to actually solve any problem in front of it, but that’s ok, because neither can humans. The Turing test is designed to identify human-level AIs, not Singularity-grade quasi-godlike uber-minds.
You dismiss “chattering” as a merely “linguistic” trick, but that’s just an implementation detail. Who cares whether the AI runs on biological wetware or a “narrow statistical machine”? If I can hold an engaging conversation with it, I’m going to keep talking until I get tired, statistics or no statistics. I get to have interesting conversations rarely enough as it is...
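To make “narrow statistical machine” concrete, here is roughly the narrowest such machine I can imagine, a bigram babbler. This is purely illustrative — any real statistical chatterbot would be vastly larger — but the spirit is the same:

```python
import random
from collections import defaultdict

# Bigram model: map each word to the list of words observed right after it.
def train(corpus):
    model = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

# "Converse" by random-walking the bigram table from a seed word.
def babble(model, seed, max_words=20):
    word, output = seed, [seed]
    for _ in range(max_words):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

model = train("the test subject must hold a conversation and the subject must sound human")
print(babble(model, "the"))  # e.g. "the subject must hold a conversation and ..."
```

Nothing in this toy ever touches meaning; whether scaling the same trick up could sustain an engaging conversation is exactly what we seem to disagree about.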
That is the premise I’m questioning here. I’m not currently convinced that a super chatterbot needs to demonstrate general intelligence.
I understand what you’re saying, but I don’t understand why. I can come up with several different interpretations of your statement:
1. Regular humans do not need to utilize their general intelligence in order to chat, and thus neither does the AI.
2. It’s possible for a chatterbot to appear generally intelligent without actually being generally intelligent (see the sketch after this list).
3. You and I are talking about radically different things when we say “general intelligence”.
4. You and I are talking about radically different things when we say “chatting”.
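On interpretation 2, the classic existence proof is ELIZA-style pattern matching: a handful of regex rules can appear engaged without anything resembling general intelligence. A minimal sketch, with a made-up rule table:

```python
import re

# Made-up rule table: regex patterns paired with canned reply templates.
RULES = [
    (re.compile(r"\bI think (.+)", re.IGNORECASE), "Why do you think {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
    (re.compile(r"\?\s*$"), "What do you think?"),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # default deflection when nothing matches

print(respond("I think the Turing test requires AGI"))
# -> "Why do you think the Turing test requires AGI?"
```

A bot like this mirrors its interlocutor instead of modeling the world, which is precisely how it can appear intelligent without being so.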
To shed light on these points, here are some questions:
Do you believe that a non-AGI chatterbot would be able to engage in a conversation with you that is very similar to the one you and I are having now?
Admittedly, I am not all that intelligent and thus not a good test case. Do you believe that a non-AGI chatterbot could be built to emulate you personally, to the point where strangers talking with it on Less Wrong could not tell the difference between it and you?
That is what I’m arguing may well be the case.
Ok, that gives me one reference point; let me see if I can narrow it down further:
Do you believe that humans are generally intelligent? Do you believe that humans use their general intelligence in order to hold conversations, as we are doing now?
Edit: “as we are doing now” above refers solely to “hold conversations”.
Actually, thinking about it, this seems surprisingly plausible. A lot of conversations run on something like autopilot.
But eventually even a human will need to think in order to continue.