It says something interesting about LLMs, because sometimes we really do the exact same thing: generating plausible text based on vibes rather than intentionally communicating anything.
The “sometimes” bit here is key. It’s my impression that people who insist that “people are just like LLMs” are basically telling you that they spend most/all of their time in conversations that are on autopilot, rather than ones where someone actually means or intends something.
Oh, sure. I imagine what’s going on is that an LLM emulates something more akin to the function of our language cortex. It can store complex meaning associations and thus regurgitate plausible enough sentences, but it’s only when closely micromanaged by some more sophisticated, abstract world model and decision engine that resides somewhere else that it does its best work.