Usually I’d agree about LLMs. However, LLMs complain about getting confused if you let them freewheel and vary the temperature—I’m pretty sure that one is real and probably has true mechanistic grounding, because even at training time, noisiness in the context window is a very detectable and bindable pattern.