You’re still arguing about what ChatGPT can or cannot do in producing responses to questions, namely that it cannot produce “a coherent answer to a question requiring two steps”. But the claim of the Chinese Room argument is that even if it could do that, and could do everything else you think it ought to be able to do, it still would not actually understand anything. For decades we have had programs that produce text yet clearly don’t understand it. That ChatGPT is another such program has no implications for whether or not the Chinese Room argument is correct. If at some point we conclude that it is simply not possible to write a program that behaves in all respects as if it understands, that wouldn’t so much refute or support the Chinese Room argument as render it pointless, since its premise could never hold.