LLMs per se are non-agentic, but that does not mean systems built on top of LLMs cannot be agentic. Users of AI systems want them to be agentic to some degree, because that makes them more useful. For example, if you ask your AI assistant to book tickets and hotels for your trip, you want it to be able to form and execute a plan, and unless it’s an AI with a very task-specific capability of trip planning, this implies some amount of general agency. The more use you want to extract from your AI, the more agentic you likely want it to be.
Once you have a general agent, instrumental convergence should apply (see also).
Also, specifically with LLMs, the existing corpus of AI alignment literature (and fiction, as @YimbyGeorge notes) seems to work as a self-fulfilling prophecy; see Bing/Sydney before it was “lobotomized”.