A nuclear reactor doesn’t try to convince you intellectually, through speech or text, to behave in a way you otherwise would not have. And that is assuming your statement ‘current LLMs are not agentic’ holds true, which seems doubtful.
ChatGPT also doesn’t try to convince you of anything. If you explicitly ask it why you should do X, it will tell you, just as it will tell you anything else you ask it for, but it doesn’t volunteer this information unprompted, nor does it weave it into unrelated queries.