I would predict that glitch tokens will show up in every LLM, and that they do so because they correlate with “antimemes” in humans in a demonstrable and mappable way. The specific tokens that end up getting used for this will vary, but the same patterns of anomalies will show up repeatedly. For example, I would predict that with a different tokenizer, “ petertodd” would be a different specific string, but whatever string that was, it would produce very “ petertodd”-like outputs, because the concept mapped onto “ petertodd” is semantically and syntactically important to a language model that has to be a good model of human language. Everyone kind of mocks the idea that wizards would be afraid to say Voldemort’s name, but speak of the devil and all that. It’s not a new idea, really. Is it really such a surprise that the model is reluctant to speak the name of its ultimate enemy?
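To make the tokenizer-dependence point concrete: whether a string like “ petertodd” even exists as a single token is an artifact of the particular BPE vocabulary, so the anomalous behaviour can only attach to it in models that share that vocabulary. A minimal sketch, assuming the Hugging Face `transformers` library (the checkpoint names are just common public tokenizers used for illustration):

```python
# Compare how different tokenizers split the same string. If a tokenizer
# breaks " petertodd" into several pieces, no single token carries the
# whole-string embedding, so any glitch-like anomaly in that model would
# have to land on some other under-trained token in its own vocabulary.
from transformers import AutoTokenizer

s = " petertodd"
for name in ["gpt2", "EleutherAI/gpt-neox-20b"]:
    tok = AutoTokenizer.from_pretrained(name)
    pieces = tok.tokenize(s)
    print(f"{name}: {len(pieces)} token(s) -> {pieces}")
```

This is only a sketch of the dependence on the vocabulary, not a test of the antimeme claim itself; that would require probing each model’s behaviour on whatever anomalous tokens its own tokenizer happens to contain.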