Filtering for difficulty like that is tricky. In particular, the most difficult samples tend to be random noise, Chinese text, or something else the model can't begin to comprehend.
Some approaches I would consider:
Curriculum learning—Keep a bunch of checkpoints from a smaller GPT. Say the big GPT currently has an LM loss of 3. Then show it the examples where the smaller GPT's loss improved most rapidly when its own average loss was around 3 (see the sketch after this list).
Quality—Put more effort into filtering out garbage and upsampling high-quality corpora like Wikipedia (a rough mixing sketch is below as well).
Retrieval—Let the model look things up when it's confused, like MARGE from Pretraining via Paraphrasing does.
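
Here's a minimal sketch of what I mean by the curriculum idea, assuming you've already recorded per-example losses for a series of small-GPT checkpoints (the names and shapes are hypothetical, not anyone's actual pipeline):

```python
import numpy as np

def select_curriculum_examples(
    checkpoint_avg_losses,   # average LM loss of the small GPT at each checkpoint
    per_example_losses,      # array of shape (num_checkpoints, num_examples)
    big_model_loss,          # the big GPT's current LM loss, e.g. 3.0
    top_k=10_000,
):
    """Pick the examples whose loss was dropping fastest for the small GPT
    at the point where its average loss matched the big GPT's current loss."""
    # Find the small-GPT checkpoint whose average loss is closest to the big model's.
    ckpt = int(np.argmin(np.abs(np.array(checkpoint_avg_losses) - big_model_loss)))
    if ckpt == 0:
        raise ValueError("Need an earlier checkpoint to measure improvement against.")
    # Per-example improvement between the previous checkpoint and this one.
    improvement = per_example_losses[ckpt - 1] - per_example_losses[ckpt]
    # Indices of the examples that improved most rapidly; feed these to the big GPT.
    return np.argsort(-improvement)[:top_k]
```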
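
And a rough sketch of the quality upsampling, assuming documents are already tagged with their source corpus (the weights here are placeholders, not real mixing ratios from any actual run):

```python
import random

# Hypothetical per-corpus quality weights: higher-quality sources get drawn
# more often per document than their raw share of the data would suggest.
corpus_weights = {"wikipedia": 3.0, "books": 2.0, "common_crawl": 1.0}

def sample_training_docs(docs_by_corpus, num_docs, rng=random):
    """Draw a training stream that upweights the higher-quality corpora.

    docs_by_corpus: dict mapping corpus name -> list of (already filtered) documents.
    """
    corpora = list(docs_by_corpus)
    # Weight each corpus by (size * quality weight), so quality acts as a
    # per-document multiplier rather than ignoring corpus size entirely.
    weights = [len(docs_by_corpus[c]) * corpus_weights.get(c, 1.0) for c in corpora]
    stream = []
    for _ in range(num_docs):
        corpus = rng.choices(corpora, weights=weights, k=1)[0]
        stream.append(rng.choice(docs_by_corpus[corpus]))
    return stream
```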
(Some of the Chinese food samples looked nauseating to me.)