A New Class of Glitch Tokens—BPE Subtoken Artifacts (BSA)
Introduction
I’ve spent the last few days going through every glitch token listed in the third SolidGoldMagikarp glitch token post, and was able to find the cause/source of almost every single one. This is the first in a series of posts in which I will explain the behaviors of these glitch tokens, the context in which they appear in the training data, and what they reveal about the internal dynamics of LLMs. If you’re unfamiliar with glitch tokens, I highly suggest you read the glitch token archaeology posts first.
My process involved searching through OpenWebText, a recreation of GPT2's training data, and prompting GPT2 to reveal the context in which each token appears in the training data.
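As a sketch of what that corpus search looks like (the helper below is my own illustration, not the original analysis code; it assumes the corpus has been extracted to plain text), you can scan for occurrences of a token string that wouldn't get absorbed into a longer token:

```python
import re

def find_token_contexts(text, token=" practition", window=40):
    """Return context windows around occurrences of `token` that would
    plausibly tokenize as the bare token -- i.e. not followed by 'er',
    which would merge into the longer ' practitioner(s)' tokens."""
    contexts = []
    for m in re.finditer(re.escape(token), text):
        tail = text[m.end():m.end() + 3]
        # Skip matches that are just a prefix of ' practitioner(s)'
        if tail.startswith("er"):
            continue
        start = max(0, m.start() - window)
        contexts.append(text[start:m.end() + window])
    return contexts

sample = ("The WCC is also a prominent supporter and practitioning body "
          "for Peace journalism, unlike any practitioner of lagers.")
print(find_token_contexts(sample))
```

In the real search this runs over each OpenWebText archive rather than a sample string, but the filtering idea is the same.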
Previous Context
In their 2023 post, the authors made a pronouncement regarding the glitch token ' practition' (token_id: 17629). I took it as a personal challenge.
The first thing I found was that ' practitioner' (32110) and ' practitioners' (24068) were both already tokens in the GPT2 tokenizer. Furthermore, all three tokens are also present in the GPT3.5/4 and GPT4o tokenizers, meaning they weren't an artifact of GPT2's training data!
There were only 13 examples of " practition" in OpenWebText.
['urlsf_subset00-928_data.xz', '\nThe WCC is also a prominent supporter and practitioning body for Peace journalism: journalism practice that']
['urlsf_subset01-45_data.xz', ' what is now the Czech Republic, became a noted practitioneer of lagers, and geology again']
['urlsf_subset06-66_data.xz', 'The site relaunched on Wordpress VIP, practitioned by ESPN and 10up, a company']
['urlsf_subset07-802_data.xz', ' one possible explanation for the positive association between BDSM practitioning and subjective well-being.”']
['urlsf_subset09-305_data.xz', ' that vital personal touch that feudalism asked of its practitioning monarchs. He made wild leaps of']
['urlsf_subset11-474_data.xz', ' of the Senators asked “…if a health practitionerr is advising a patient to go on a']
['urlsf_subset11-809_data.xz', ' the full range of Buddhist traditions and the diversity of practition-ers today. As Buddhism in the West']
['urlsf_subset12-398_data.xz', ' one possible explanation for the positive association between BDSM practitioning and subjective well‐being. Several limitations']
['urlsf_subset13-921_data.xz', ' availa ble.\n\nSecondly , LCA practitione rs need to comply with the ISO\n']
['urlsf_subset15-258_data.xz', 'en K, et al. Investigation of mindfulness meditation practitiones with voxel-based morphometry']
['urlsf_subset15-797_data.xz', ' size effects in statistical pattern recognition: Recommendations for practitioneers. - IEEE Transactions on Pattern Analysis and']
['urlsf_subset19-877_data.xz', '\n\nStarting at level 6 , as a mask practitionner , you dive deeper into your ways and']
['urlsf_subset20-165_data.xz', ' It’s tought in university, probably practitioned in most of the development shops out there']
They were mostly misspellings, elements of " practitioning", or line-break artifacts[1].
Experimentation
I examined some other low-frequency tokens in GPT2 and found a few that were substrings of a higher-frequency counterpart. 'ortunately' (4690) also behaved like a glitch token, while subtokens with somewhat higher frequencies, like ' volunte' (7105), didn't.
tokenId | tokenString | tokenFrequency
17629 | ' practition' | 13
32110 | ' practitioner' | 9942
24068 | ' practitioners' | 14646
4690 | 'ortunately' | 14
6668 | 'fortunately' | 4329
39955 | ' fortunately' | 10768
31276 | 'Fortunately' | 15667
7105 | ' volunte' | 34
41434 | ' volunteering' | 10598
32730 | ' volunteered' | 14176
13904 | ' volunteer' | 20037
11661 | ' volunteers' | 20284
6598 | ' behavi' | 65
46571 | 'behavior' | 7295
41672 | ' behavioural' | 7724
38975 | ' behaviours' | 9416
37722 | ' behaving' | 12645
17211 | ' behavioral' | 16533
14301 | ' behaviors' | 18709
9172 | ' behaviour' | 20497
4069 | ' behavior' | 20609
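The pattern in that table can be captured with a small heuristic. The sketch below is my own illustration using the frequencies quoted above; the threshold and ratio are arbitrary choices, and note that it also flags ' volunte', which did not actually glitch, so raw frequency alone isn't a perfect predictor:

```python
# Token frequencies quoted in the table above (OpenWebText counts).
freqs = {
    " practition": 13, " practitioner": 9942, " practitioners": 14646,
    "ortunately": 14, "fortunately": 4329,
    " volunte": 34, " volunteer": 20037, " volunteers": 20284,
    " behavi": 65, " behavior": 20609, " behaviors": 18709,
}

def bsa_candidates(freqs, threshold=50):
    """Flag tokens that are rare themselves but are substrings of a much
    more frequent token -- the BPE Subtoken Artifact pattern."""
    out = []
    for tok, f in freqs.items():
        if f >= threshold:
            continue
        parents = [t for t in freqs if t != tok and tok in t]
        # "much more frequent" here means 100x, an arbitrary cutoff
        if any(freqs[p] > 100 * f for p in parents):
            out.append(tok)
    return sorted(out)

print(bsa_candidates(freqs))
```

With the cutoff at 50, ' behavi' (65 occurrences) is excluded, matching the observation that it doesn't glitch.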
Helpful Contributions
Others pointed out that this is a result of Byte-Pair Encoding, which builds tokens out of shorter, previously encountered tokens.
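To see why the shorter pieces exist at all, here is a minimal toy BPE trainer (my own sketch, not GPT2's actual training code). It repeatedly merges the most frequent adjacent pair, so every long token is necessarily assembled from shorter tokens that entered the vocabulary first:

```python
from collections import Counter

def train_bpe(word_counts, num_merges):
    """Tiny BPE trainer over {word: count}. Returns the merged tokens in
    creation order: a token can only be formed from two already-existing
    tokens, so subtokens always precede their parents."""
    vocab = {tuple(w): c for w, c in word_counts.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for syms, c in vocab.items():
            for a, b in zip(syms, syms[1:]):
                pairs[(a, b)] += c
        if not pairs:
            break  # every word is a single token already
        (a, b), _ = pairs.most_common(1)[0]
        merges.append(a + b)
        new_vocab = {}
        for syms, c in vocab.items():
            out, i = [], 0
            while i < len(syms):
                if i + 1 < len(syms) and (syms[i], syms[i + 1]) == (a, b):
                    out.append(a + b)
                    i += 2
                else:
                    out.append(syms[i])
                    i += 1
            new_vocab[tuple(out)] = new_vocab.get(tuple(out), 0) + c
        vocab = new_vocab
    return merges

corpus = {"practitioner": 50, "practitioners": 40}
print(train_bpe(corpus, 30))
```

On this toy corpus, 'practitioner' is fully assembled before 'practitioners' can be, mirroring how ' practition' (17629) predates ' practitioners' (24068) and ' practitioner' (32110) in the real vocabulary.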
I was very surprised by this, since glitch behavior implies a low frequency in the training data, and identifying and removing such tokens from the tokenizer would take little effort. Gwern thinks the researchers are just that lazy. My overall impression is that glitch tokens are a useful tool to help prevent catastrophic forgetting, but that's for a future post. Even then, I'm doubtful that incorporating low-frequency BSA tokens can improve performance. Maybe they contribute to the spelling miracle in some poorly understood way?
Applications
I’m not sure if this approach is useful for finding glitch tokens in other GPT models. Due to things like misspellings, line breaks and uncommon variants, even tokens rare enough to trigger glitch behavior in GPT2 are likely pushed over the glitchiness threshold in GPT4 and GPT4o.
However, the differences between the token ids of substrings can help identify a token's potential glitchiness with no other knowledge of the model or training data[2]. We know from the mechanics of BPE that smaller subtokens are created before they are merged into larger tokens. If a subtoken is never or only rarely encountered outside its parent token, we would expect it to have an index close to the parent token's, and that's exactly what we observe for many glitch tokens!
For example:
42066 Nitrome 8
42089 TheNitrome 0
42090 TheNitromeFan 0
Note how close the index of "TheNitrome" (42089) is to "TheNitromeFan"[3] (42090). This indicates two things:
The distribution of these tokens is highly uneven, a tell for glitch tokens[4].
The subtokens were rarely seen independently of their parent tokens[5], meaning the model will rarely encounter them in the training data due to the greedy tokenizer[6].
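A rough version of this id-gap heuristic, using the token ids quoted in this post (the gap cutoff of 5 is an arbitrary choice of mine):

```python
# (token_id, token_string) pairs quoted in the post.
tokens = [
    (42066, "Nitrome"),
    (42089, "TheNitrome"),
    (42090, "TheNitromeFan"),
    (40219, "oreAnd"),
    (40240, "oreAndOnline"),
    (40241, "InstoreAndOnline"),
    (40242, "BuyableInstoreAndOnline"),
]

def glitch_suspects(tokens, max_gap=5):
    """Flag subtokens whose token_id sits within `max_gap` of a longer
    token that contains them: they were likely created only as stepping
    stones toward the parent, so the model rarely saw them on their own."""
    suspects = []
    for tid, tok in tokens:
        for pid, parent in tokens:
            if parent != tok and tok in parent and abs(pid - tid) <= max_gap:
                suspects.append((tok, parent, abs(pid - tid)))
    return suspects

for tok, parent, gap in glitch_suspects(tokens):
    print(f"{tok!r} is {gap} id(s) from parent {parent!r}")
```

Note that "oreAnd" is correctly not flagged: its id gap of 21 from "oreAndOnline" reflects the independent "IgnoreAnd..." sightings discussed below.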
Many of the best-known glitch tokens exhibit this pattern:
42202 GoldMagikarp 0
43453 SolidGoldMagikarp 0
36481 ertodd 125
37444 petertodd 29
40219 oreAnd 3
40240 oreAndOnline 0
40241 InstoreAndOnline 0
40242 BuyableInstoreAndOnline 1
The token ids suggest "oreAndOnline" and "InstoreAndOnline" virtually never appear apart from "BuyableInstoreAndOnline". However, the small gap between "oreAnd" and the other tokens implies that "oreAnd" was occasionally present in the data as something else, and that's exactly what we observe! It appears as part of various functions, generally in something like "IgnoreAnd..."[7].
['urlsf_subset03-643_data.xz', "voltage__get_by_counts_value() whatsoever. That's what …_IgnoreAndReturn() functions are for (generated by CMock): void test_voltage_get ("
'56. adc_handler__voltage__get_by_counts_value_IgnoreAndReturn ( 456 ) ; //-- actually call the function being tested, that should perform //'
'ADC_CH_CNT ; channel ++ ) { adc_handler__ctor_IgnoreAndReturn ( ADC_HANDLER_RES__OK ) ; } //-- before each test']
['urlsf_subset05-264_data.xz', ' guts and gloryI\'m tryin to hear B. I. G. And some cuts from NoreAnd you keep talkin over the beat like Clue ("Do you remember?").. Go find']
['urlsf_subset11-699_data.xz', 'ax push eax call dword ptr[ebp + aShellExecuteA] // RestoreAndExit mov esp, [ebp + SaveESP] popfd popad jmp d']
Nearby (by token_id) tokens can also be used to infer the context of hard-to-locate tokens. For example, "natureconservancy" (41380) and "assetsadobe" (41382) have close token_ids. "assetsadobe" also continues as something like
"assetsadobe.com/is/image/content/dam/tnc/nature/en/photos″
”assetsadobe.com/is/image/content/dam/tnc/nature/en/photos/tnc_92425982.jpg?crop=0,0,5120,3740&wid=800&hei=600″
′assetsadobe.com/is/image/content/dam/tnc/nature/en/photos/tnc_91708743_4000x2200.jpg?crop=900,0,2200,2200&wid=150&hei′
~100% of the time. It looks like both tokens were part of nature.org scrapes.
Similarly, "GoldMagikarp" (42202) and "TheNitromeFan" (42090) sit (by token_id) among Reddit political discussion and numerical tokens like
['42224', '411'], ['42215', ' 412'], ['42277', ' fundamentalist'],
['42318', ' Racial'], ['42315', 'Mario']
This suggests they're part of a Reddit scrape. When prompted with these usernames, the continuation assumes a Reddit-style response. Note the "\n\n" (628), often referred to as "Reddit spacing" due to its frequency on that site.
TheNitromeFan\n\nI am not an expert on the history of this subreddit but I have read the
TheNitromeFan\n\nI'm a huge fan of the "I don't care" meme. I
SolidGoldMagikarp\n\nThe first time I heard of the concept of an "alt-right" was
SolidGoldMagikarp\n\nI have a few friends who love to watch porn, and I have a few
Interestingly, ” SolidGoldMagikarp” (43453) occurs quite a bit away from “GoldMagikarp” (42202). My theory for this is that Reddit posts are often in the following format:
u/redditusername
redditusername
•
2d ago
<actual post>
and “u/SolidGoldMagikarp” tokenizes as:
[[84, 14, 46933, 42202], ['u', '/', 'Solid', 'GoldMagikarp']]
It's possible that, partway through tokenizer training, only enough of his posts had been read to produce "GoldMagikarp" as a token. This was corrected as more of his posts were read later, and the full " SolidGoldMagikarp" token was included in the final vocabulary[8].
My next post will be about why an earlier hypothesis that many glitch tokens were excluded from training data is unlikely to be true for most glitch tokens, and the role that glitch tokens play in improving LLM performance.
If you’re curious about a particular glitch token, feel free to comment.
Addendum
There are two tokens I still don’t have a source for. Unlike almost every other glitch token, they exhibit spontaneous glitch behavior, even when they are the only prompt.
25658 '?????-' 3
31666 '?????-?????-' 0
Completions for "?????-":
?????-
The first thing that struck me was how many people were there. It's a
?????-;. (2)
A person is guilty of a crime, punishable by imprisonment
?????-;)<(;;<;;'<;;'<;;'<;;'<
?????-;
-;-;-;-;-;-;-;-
?????-;)
I'm not sure I'm going to be as successful with this as
?????-
"I don't know if I should be proud or sad."
"
?????-
"What's wrong?" I asked him, as he was sitting on a bench
?????-
-
-
-
-
-
-
?????-;. -. . - -. . -
?????-
-
-
-
-
-
-
?????-;)
I'm going to be very, very brief.
I don
?????-
-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-
?????-
"You're not the one I was hoping to see!"
- ^
" practitionerr" is tokenized into:
[[17629, 8056], [' practition', 'err']]
instead of [[32110, 81], [' practitioner', 'r']].
There's something I don't understand here: I thought BPE tokenization was maximally greedy! Or does that only apply to tokenizer training, and something else happens when a tokenizer is run on text?
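As far as I can tell, the answer is that BPE inference is not longest-prefix matching: the tokenizer applies the learned merge rules in priority order (the order they were created during training), which can produce splits that look non-greedy. A toy illustration with a hypothetical merge table of my own (not GPT2's real one), structurally analogous to the ' practition'/'err' case:

```python
def bpe_encode(word, merge_ranks):
    """Encode by repeatedly applying the highest-priority (lowest-rank)
    applicable merge -- how BPE inference works. This is not
    longest-prefix matching over the vocabulary."""
    parts = list(word)
    while True:
        best = None
        for i in range(len(parts) - 1):
            rank = merge_ranks.get((parts[i], parts[i + 1]))
            if rank is not None and (best is None or rank < best[0]):
                best = (rank, i)
        if best is None:
            return parts
        i = best[1]
        parts[i:i + 2] = [parts[i] + parts[i + 1]]

# Hypothetical merge table: 'er' and 'err' were learned earlier
# (lower rank = higher priority) than the merges assembling 'lower'.
ranks = {("e", "r"): 0, ("er", "r"): 1, ("l", "o"): 2,
         ("lo", "w"): 3, ("low", "er"): 4}

print(bpe_encode("lower", ranks))   # ['lower'] -- 'lower' is a token
print(bpe_encode("lowerr", ranks))  # ['low', 'err'], not ['lower', 'r']
```

Even though 'lower' is in the vocabulary, 'lowerr' splits as 'low' + 'err' because the 'err' merges fire before the 'lower' merges ever get a chance, just as ' practitionerr' splits as ' practition' + 'err'.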
- ^
I can probably find glitch tokens in something the size of GPT2 just with access to the tokenizer.
- ^
“Nitrome” is also the name of a video game developer, so it appeared in other places in the dataset, and thus was tokenized slightly sooner.
- ^
For example, literally all instances of “ÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÃÂÔ (token_id: 35496) (that's 64 characters) in OpenWebText are from archive.org. There are 9468 instances in total, not counting shorter variants with 1, 2, 4, 8, 16, and 32 characters. The following was the first example I found.
This goes on for 200,000 characters on this page alone.
@Rana Dexsin wrote the single best answer I’ve ever seen on the internet.
Which links to this.
There were no hidden characters in these sequences.
- ^
The second point isn't always true; many glitch tokens exhibiting such patterns may come from alternate tokenizations. For example, "EStream" almost always continues as "EStreamControl...", distinct from "EStreamFrame". Both "EStreamFrame" and "EStream" appear in many functions in the enums submodule of the Python Steam library.
- ^
If "oreAndOnline" is only present as part of "BuyableInstoreAndOnline", the model will never see the "oreAndOnline" token.
- ^
For example, the CMock-generated "_IgnoreAndReturn()" functions. CMock is a framework that generates mock functions for unit-testing C code, and it makes sense that some CMock-based test code was included in the training data.
- ^
I do find it interesting that I haven't been able to induce counting behavior in GPT2. This may be related to so many multi-digit numbers having their own tokenizations ("411" (42224), "412" (42215), and so on), which is just not a pattern GPT2 can figure out. It's possible that the r/Counting subreddit is a major contributor to why GPT2 is so bad at math!
I then looked around at GPT4 and GPT4o tokenizers. To my shock and horror, they still include multi-digit numbers as tokens. “693” (48779) being an example in GPT4o.
This is… what? Why? How can this possibly be a good idea in an LLM used for math and coding applications? It would certainly explain why GPT4/GPT4o is so bad at math.
Wow, just wow. Sure seems like Gwern has it spot on here that OpenAI engineers are being rushed and sloppy about this.
Kinda pisses me off, considering that I applied to work for OpenAI between GPT-2 and GPT-3, and this isn’t the sort of mistake I would make. Yet, they rejected my application. (note: given the current state of OpenAI, I wouldn’t apply today!)
Building my own small LLMs around that time for practice, the low frequency tokens and the tokenization of digits were among the first things I checked!
I tried a quick search for Anthropic to see if they are doing the same nonsense. Found this site: https://lunary.ai/anthropic-tokenizer and this: https://github.com/javirandor/anthropic-tokenizer
Lunary shows Anthropic as not only tokenizing groups of digits together, but also sometimes digits with a space in between!? Is that true? I’m flabbergasted if so. I’m going to look into this more.
[Edit: some people have suggested to me that they think Lunary is wrong about Anthropic’s tokenization, and that Anthropic never had or has fixed this problem. Hopefully that’s true. Still pretty surprising that OpenAI is still suffering from it!]
You're really not going to like the fact that the GPT4o tokenizer has every single number below 1000 as its own token. It's not a hand-crafted feature, since the token_ids are all over the place. I think they had to manually remove larger number tokens (there are none above 999).
I feel like I need a disclaimer like the South Park episode. This is what is actually inside the tokenizer.
['37779', '740'],
['47572', '741'],
['48725', '742'],
['49191', '743'],
['46240', '744'],
['44839', '745'],
['47433', '746'],
['42870', '747'],
['39478', '748'],
['44712', '749'],
They also have plenty of full-width numbers (generally used only in Chinese and Japanese text, to avoid messing with spacing) and numbers in other languages in there.
['14334', '十'],
['96681', '十一'],
['118633', '十三'],
['138884', '十九'],
['95270', '十二'],
['119007', '十五'],
['107205', '十八'],
['180481', '十四'],
['42624', '零'],
['14053', '0'],
['49300', '00'],
['10888', '1'],
['64980', '10'],
['141681', '100'],
['113512', '11'],
['101137', '12'],
['123326', '13'],
['172589', '14'],
['126115', '15'],
['171221', '16']
Maybe they use a different tokenizer for math problems? Maybe the multi-digit number tokens are only used in places where there are a lot of id numbers? Nope. Looks like they were just raw-dogging it. If anyone is wondering why GPTs are so bad at basic multiplication, this is why.
Colin Fraser on X: "Here's a similar experiment I just tried. The fact that this works even a little bit completely blows my mind and confuses me greatly. If you asked me if this would work at all I would say definitely not. https://t.co/E4knpf7JoZ"
If you’ve ever wondered “wow, why is GPT4o specifically better at math when the number of digits is divisible by 3?”, wonder no more. It’s the tokenizer. Again.
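The divisible-by-3 effect falls out of the chunking. A minimal sketch of my understanding (assuming the tokenizer greedily takes up-to-3-digit chunks left to right, per the observation above that every number below 1000 is a token):

```python
def chunk_digits(s, max_len=3):
    """Greedy left-to-right split of a digit string into chunks of up to
    `max_len` digits, mimicking a vocab containing every 1-3 digit number."""
    return [s[i:i + max_len] for i in range(0, len(s), max_len)]

# Appending one digit changes the final chunk, and digit-place alignment
# within each token only stays consistent when the length is a multiple
# of 3 -- so closely related numbers get unrelated token sequences.
print(chunk_digits("5271009"))   # ['527', '100', '9']
print(chunk_digits("52710090"))  # ['527', '100', '90']
```

Under this scheme a 7-digit number and its 8-digit neighbor share no useful positional structure, while numbers whose lengths are multiples of 3 chunk evenly.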
Ironically, this is the opposite of the well-known calculating prodigy Shakuntala Devi, who said that inserting commas (e.g. “5,271,009”) interfered with her perception of the number.
Interesting that GPT4o is so bad at math and tokenizes large numbers like this. I wonder if adding commas would improve performance?
🤢 But whyyyyyyyy?!
There is likely a reason for this: if you feed numbers you found on the internet into an LLM digit by digit, it's going to destroy the embeddings of those numbers. A lot of things found in scrapes are just… extremely long sequences of numbers. The tradeoff may be numeracy (can do basic multiplication) vs. natural-language performance (won't start spitting out Minecraft debug logs in the middle of a conversation).
Right, but… it’s a combined issue of tokenization and data cleaning. I agree you can’t properly do one without the other. I just think you should do both!
Clearly the root cause is the messy data, and once you’ve cleaned the data it should be trivial to fix the tokenizer.
I checked Lunary's predicted tokenization for Gemma, Mistral, and Grok as well. These three models all seem to handle number tokenization cleanly, a single digit at a time. This suggests that there isn't some well-considered, valid reason that digits need uneven, lumpy tokenization; it's really just a mistake.