The point is not “not understanding sometimes”; the point is not understanding in the sense of an inability to generate responses that have no close analogs in the training set. ChatGPT is very good at finding the closest example and fitting it into the output text. What it obviously cannot do is take two things it can answer satisfactorily on their own and combine them into a coherent answer to a question requiring two steps (unless it has already seen an analog of that two-step answer).
This shows a complete lack of usable semantic encoding, which is the core of Searle’s original argument.
You’re still arguing with reference to what ChatGPT can or cannot do in producing responses to questions—that it cannot produce “a coherent answer to a question requiring two steps”. But the claim of the Chinese Room argument is that even if it could do that, and could do everything else you think it ought to be able to do, it still would not actually understand anything. For decades we have had programs that produce text but clearly don’t understand many things. That ChatGPT is another such program has no implications for whether or not the Chinese Room argument is correct. If at some point we conclude that it is simply not possible to write a program that behaves in all respects as if it understands, that wouldn’t so much refute or support the Chinese Room argument as render it pointless, since its premise cannot possibly hold.