Thoughts on “Deep Learning is Robust to Massive Label Noise.”

From the paper’s abstract: “We show that deep neural networks are capable of generalizing from training data for which true labels are massively outnumbered by incorrect labels. We demonstrate remarkably high test performance after training on corrupted data from MNIST, CIFAR, and ImageNet. For example, on MNIST we obtain test accuracy above 90 percent even after each clean training example has been diluted with 100 randomly-labeled examples. Such behavior holds across multiple patterns of label noise, even when erroneous labels are biased towards confusing classes. We show that training in this regime requires a significant but manageable increase in dataset size that is related to the factor by which correct labels have been diluted. Finally, we provide an analysis of our results that shows how increasing noise decreases the effective batch size.”
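To make the dilution setup concrete, here is a minimal numpy sketch of how such a training set could be built. This is my reconstruction, not the authors’ code; it simply reuses the clean images as the source of the randomly-labeled copies, so with alpha = 100 only about 1 in 101 labels is guaranteed correct, matching the MNIST example in the abstract.

```python
import numpy as np

def dilute_with_random_labels(x_clean, y_clean, alpha, num_classes, seed=0):
    """Return a training set in which each clean example is accompanied by
    `alpha` noisy examples carrying uniformly random labels."""
    rng = np.random.default_rng(seed)
    # Reuse the clean images as the noisy examples; only their labels are random.
    x_noisy = np.repeat(x_clean, alpha, axis=0)
    y_noisy = rng.integers(0, num_classes, size=len(x_noisy))
    x_all = np.concatenate([x_clean, x_noisy], axis=0)
    y_all = np.concatenate([y_clean, y_noisy], axis=0)
    # Shuffle so clean and noisy examples end up mixed within batches.
    perm = rng.permutation(len(x_all))
    return x_all[perm], y_all[perm]

# Toy stand-in for MNIST-shaped data, diluted with alpha = 100.
x = np.zeros((100, 28, 28), dtype=np.float32)
y = np.arange(100) % 10
x_big, y_big = dilute_with_random_labels(x, y, alpha=100, num_classes=10)
```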
I think this robustness result suggests that, given data quality concerns in the corpus, we should look for contexts with low absolute numbers of good examples, or where the good completions are not a “qualitative plurality.” For example, suppose a subset of the data involves instruction finetuning and the training data contains 10 “personas” (e.g. an evil clown, a helpful assistant, and so on). As long as the helpful dialogues are a plurality, are numerous “enough” in absolute terms, and the batch size is large enough, various forms of greedy sampling should still elicit helpful completions.
I wonder if the above is actually true in the LLM regime!
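If one wanted to operationalize that heuristic as a corpus audit, a first pass might look something like the sketch below. The (context_id, persona_label) schema, the “helpful” label, and the count threshold are hypothetical stand-ins for whatever metadata and tolerances a real corpus would have.

```python
from collections import Counter

def flag_risky_contexts(examples, good_label="helpful", min_good_count=50):
    """Group completions by context and flag contexts where the good
    completions are not a strict plurality, or are too few in absolute terms.

    `examples` is assumed to be an iterable of (context_id, persona_label)
    pairs; this is a hypothetical schema, not something from the paper."""
    by_context = {}
    for context_id, persona in examples:
        by_context.setdefault(context_id, Counter())[persona] += 1

    flagged = {}
    for context_id, counts in by_context.items():
        good = counts.get(good_label, 0)
        top_label, top_count = counts.most_common(1)[0]
        tied_at_top = sum(1 for c in counts.values() if c == top_count) > 1
        if top_label != good_label or tied_at_top or good < min_good_count:
            flagged[context_id] = dict(counts)
    return flagged

# Hypothetical usage: each record is (context_id, persona_label).
corpus = (
    [("math_help", "helpful")] * 60
    + [("math_help", "evil_clown")] * 40
    + [("jailbreak_prompt", "helpful")] * 5
    + [("jailbreak_prompt", "evil_clown")] * 9
)
print(flag_risky_contexts(corpus))  # flags "jailbreak_prompt" only
```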
Another interesting result is that the absolute number of correct labels matters a lot, not just their proportion.
Furthermore, all of this should be taken relative to the batch size. According to these experiments, in the limit of infinite batch size, greedy sampling should still elicit the good completions as long as they constitute a plurality of the training set.
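The intuition, as I understand it: with a large enough batch, the gradient at a given context averages over the label distribution, so cross-entropy training pushes the model toward the empirical completion distribution, whose mode is the plurality completion; greedy (argmax) sampling then returns that mode. Here is a toy sanity check of that claim (my own simulation, not the paper’s analysis), where the “good” class is only a 40% plurality:

```python
import numpy as np

rng = np.random.default_rng(0)

# Label distribution for one fixed context: the "good" completion (class 0)
# is a plurality at 40%, with the rest split among bad completions.
probs = np.array([0.40, 0.25, 0.20, 0.15])
num_classes = len(probs)

def train_argmax(batch_size, steps=500, lr=0.5):
    """Fit per-context softmax logits with cross-entropy SGD on noisy labels
    and return the greedily sampled (argmax) class."""
    logits = np.zeros(num_classes)
    for _ in range(steps):
        labels = rng.choice(num_classes, size=batch_size, p=probs)
        target = np.bincount(labels, minlength=num_classes) / batch_size
        soft = np.exp(logits - logits.max())
        soft /= soft.sum()
        # Gradient of mean cross-entropy w.r.t. logits: softmax minus
        # the empirical label distribution of the batch.
        logits -= lr * (soft - target)
    return int(np.argmax(logits))

# Larger batches average away label noise, so argmax recovers the
# plurality ("good") class more reliably.
for b in [1, 8, 64, 512]:
    picks = [train_argmax(b) for _ in range(20)]
    frac_good = np.mean([p == 0 for p in picks])
    print(f"batch size {b:4d}: argmax picks the good class in {frac_good:.0%} of runs")
```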