Oh, sure. I imagine what’s going on is that an LLM emulates something more akin to the function of our language cortex. It can store complex meaning associations and thus regurgitate plausible-enough sentences, but it’s only when closely micromanaged by some more sophisticated, abstract world model and decision engine residing somewhere else that it does its best work.