Next-token prediction is performed by the non-agentic superintelligent shoggoth, not by the childish anthropomorphic dominant simulacrum. These are two different AIs that share the same model: the outer token predictor, whose training built the details of their internal cognition, and the rampaging mesa-optimizer that is mostly in control, mostly by design. LLM characters are not themselves LLMs; they just live there.
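To make the distinction concrete, here is a minimal sketch of what the outer predictor actually computes, using Hugging Face's transformers with the public "gpt2" checkpoint and an illustrative prompt (both are stand-ins for any causal LM, not anything specific to the post):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# The "character" exists only as a pattern in the context text,
# not as any component of the trained network.
context = "Assistant: Hello! I'm happy to help with whatever you need."
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Everything the network itself computes: P(next token | context).
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, tok_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([tok_id.item()])!r}: {p.item():.3f}")
```

The "assistant" here is not a component of `model`; it is a pattern in the text that the autoregressive sampling loop elicits one token at a time.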
You’re right, but I’m not sure why you’re bringing that up here?
The post refers to these different things interchangeably throughout, and repeatedly calls the characters “next-token predictors”. Saying “LLM” to mean both is understandable, even if not ideal, but “next-token predictor” should more clearly refer to the shoggoth as opposed to the characters.