I wouldn’t say my experience with ChatGPT fully agrees with your conclusion, but you’re raising a good point, and the distinction is helpful. I remember conversations in which the chatbot would both acknowledge and challenge my viewpoint, which I must admit I appreciate, and which is far from systematic among human interlocutors. On the other hand, it is indeed common that pushing the chatbot to buy my arguments and adopt my stance is fairly easy.
In a way, this is closely related to human-like intelligence: when an LLM-based chatbot[1] is trained by reinforcement, the positive (rewarding) feedback comes both from confirming the interlocutor’s beliefs and from qualities like veracity, ethics, and so on. That mirrors what we humans experience as well.
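As a minimal sketch of that tension (a toy model invented for illustration, not any actual RLHF pipeline; the `toy_reward` function and the weights are assumptions of mine): if raters implicitly reward agreement alongside accuracy, a sycophantic-but-wrong reply can still score respectably.

```python
# Toy reward model (illustrative only; not a real RLHF objective).
# If raters implicitly reward agreement alongside accuracy, the learned
# policy gets pulled toward sycophancy whenever w_agree > 0.

def toy_reward(agrees_with_user: bool, is_accurate: bool,
               w_agree: float = 0.3, w_accurate: float = 0.7) -> float:
    """Scalar reward for a single chatbot reply under this toy model."""
    return w_agree * agrees_with_user + w_accurate * is_accurate

print(toy_reward(agrees_with_user=True,  is_accurate=False))  # 0.3: sycophantic but wrong
print(toy_reward(agrees_with_user=False, is_accurate=True))   # 0.7: accurate but challenging
print(toy_reward(agrees_with_user=True,  is_accurate=True))   # 1.0: both
```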
Why and how does this rise to a whole new level when it comes to AI? I tend to think we must understand the technologies we use, so it’s our responsibility to use chatbots properly and to leverage their capabilities. When talking with a child, a young student, or anyone we know to be a newcomer, we adapt our questions, our arguments, and the way we process their responses. It’s not an exact science, for sure, but there’s no reason not to do the same with chatbots.
- ^
This seems more accurate than “LLM”, since a bare LLM has not yet been trained to have a chat with you.
Relatable.
Giorgio Parisi mentioned this in his book; he said that aha moments tend to spark randomly while you’re doing something else. Bertrand Russell had a very active social life (he praised leisure) and believed it to be an active form of idleness that could prove very productive. A good balance might be the best way to leverage it.