I understand what you’re saying, but I don’t understand why. I can come up with several different interpretations of your statement:
Regular humans do not need to utilize their general intelligence in order to chat, and thus neither does the AI.
It’s possible for a chatterbot to appear generally intelligent without actually being generally intelligent.
You and I are talking about radically different things when we say “general intelligence”.
You and I are talking about radically different things when we say “chatting”.
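The second interpretation can be illustrated concretely. Below is a minimal ELIZA-style sketch (all rules and responses are hypothetical, invented for illustration) showing how shallow pattern matching can appear responsive without any general reasoning behind it:

```python
import re

# Canned pattern -> response templates. Matching is purely surface-level;
# no model of the conversation or the world is involved.
RULES = [
    (re.compile(r"\bI think (.+)", re.I), "Why do you think {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
    (re.compile(r"\?$"), "What do you believe the answer is?"),
]

def reply(message: str) -> str:
    """Return a scripted response by shallow pattern matching."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(reply("I think chatting requires general intelligence"))
# -> Why do you think chatting requires general intelligence?
```

A system like this gives an *appearance* of engagement by echoing the interlocutor's own words back, which is the gap between seeming generally intelligent and being generally intelligent that this interpretation points at.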
To shed light on these points, here are some questions:
Do you believe that a non-AGI chatterbot would be able to engage in a conversation with you that is very similar to the one you and I are having now?
Admittedly, I am not all that intelligent and thus not a good test case. Do you believe that a non-AGI chatterbot could be built to emulate you personally, to the point where strangers talking with it on Less Wrong could not tell the difference between it and you?
Ok, that gives me one reference point; let me see if I can narrow it down further:
Do you believe that humans are generally intelligent? Do you believe that humans use their general intelligence in order to hold conversations, as we are doing now?
Edit: “as we are doing now” above refers solely to “hold conversations”.
That is the premise I’m questioning here. I’m not currently convinced that a super chatterbot needs to demonstrate general intelligence.
Do you believe that a non-AGI chatterbot could be built to emulate you personally, to the point where strangers talking with it on Less Wrong could not tell the difference between it and you?
That is what I’m arguing may well be the case.
Do you believe that humans are generally intelligent? Do you believe that humans use their general intelligence in order to hold conversations, as we are doing now?
Actually, thinking about it, this seems surprisingly plausible: a lot of conversations run on something like autopilot.
But eventually even a human will need to think in order to continue.