A lot of them do look like that, but we’ve dug deep to find their true origins, and it’s all pretty random and diffuse. See Part III (https://www.lesswrong.com/posts/8viQEp8KBg2QSW4Yc/solidgoldmagikarp-iii-glitch-token-archaeology). Bear in mind that when GPT-3 is given a token like “EStreamFrame”, it doesn’t “see” what’s “inside” like we do ([“E”, “S”, “t”, “r”, “e”, “a”, “m”, “F”, “r”, “a”, “m”, “e”]). It receives it as a kind of atomic unit of language with no internal structure. Anything it “learns about” this token in training is based on where it sees it used, and it’s looking like most of these glitch tokens correspond to strings seen very infrequently in the training data (but which for some reason got into the tokenisation dataset in large numbers, probably via junk files like mangled text dumps from gaming logs, etc.).
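To make the "atomic unit" point concrete, here's a minimal sketch using the tiktoken library (I'm assuming the `r50k_base` encoding, i.e. the GPT-2/GPT-3 vocabulary; the contrast string "EStreamFrameZxq" is just a made-up example). Since "EStreamFrame" is a vocabulary entry, it should come back as a single token ID, whereas a string that isn't in the vocabulary gets chopped into several sub-word pieces:

```python
# Minimal sketch -- assumes the tiktoken library and the r50k_base encoding
# (the GPT-2/GPT-3 vocabulary); "EStreamFrameZxq" is a hypothetical contrast string.
import tiktoken

enc = tiktoken.get_encoding("r50k_base")

for s in ["EStreamFrame", "EStreamFrameZxq"]:
    ids = enc.encode(s)
    # A string that exists as a vocabulary entry encodes to a single ID;
    # anything else is split into multiple sub-word tokens.
    print(f"{s!r} -> {ids} ({len(ids)} token(s))")
```

The model only ever sees those integer IDs, so everything it "knows" about the single ID for "EStreamFrame" has to come from the contexts where that ID appeared in training, which for glitch tokens is almost nowhere.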