Abstract:
Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, data about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.
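The abstract's Gaussian claim is easy to see in a toy simulation (a minimal sketch, not the paper's code; the sample size and generation count below are arbitrary assumptions): each generation fits a Gaussian by maximum likelihood to samples drawn from the previous generation's fit, and the fitted variance shrinks over generations, i.e. the tails of the original distribution disappear.

```python
# Toy "model collapse" in the Gaussian case: recursively refit a Gaussian
# to samples drawn from the previous generation's fit. A hedged sketch of
# the phenomenon, not the paper's code; n and generations are arbitrary.
import numpy as np

def recursive_gaussian_fit(n=100, generations=1000, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0  # the "original" data distribution N(0, 1)
    for _ in range(generations):
        sample = rng.normal(mu, sigma, size=n)   # this generation's training data
        mu, sigma = sample.mean(), sample.std()  # maximum-likelihood refit
    return mu, sigma

mu, sigma = recursive_gaussian_fit()
print(sigma)  # far below the original sigma = 1.0: the tails have vanished
```

Each refit multiplies the expected variance by (n-1)/n (the bias of the maximum-likelihood variance estimator), so after g generations the expected variance is roughly ((n-1)/n)^g of the original; with n=100 and g=1000 that is about e^{-10}.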
Before reading: Can probably get around this via dynamic templates...; augmentations to curriculum learning…; training models not to need chain of thought via token-level distillation/"thought compression"; using model-critiqued (?) oversight/revision of artificial outputs. Data augmentation seems quite possible, and bagging seems plausible. Plus, models can be pretrained for ~4 epochs with negligible degradation compared to having 4x as many unique tokens, so having more epochs need not be disastrous.
After reading: This paper takes a dumb model (OPT-125m) and fine-tunes it (mostly or exclusively) on its own outputs, which will be lower-quality than the wikitext2 content it started from. In contrast, I think GPT-4 outputs are often more coherent and intelligent than random internet text. So I think it remains an open question what happens if you use a more intelligent bagging-based scheme or other training signals.
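One intuition for why the training mix matters: in a Gaussian caricature of the setup (recursively refitting a Gaussian to samples from the previous fit), keeping even a small fraction of fresh draws from the original distribution in each generation's training sample is enough to stop the variance from collapsing. This is a hedged sketch; the 10% fraction and other numbers are arbitrary assumptions, not from the paper.

```python
# Gaussian caricature of the training loop: refit each generation on a mix of
# synthetic draws (from the previous fit) and fresh draws from the original
# N(0, 1). The mixing fraction and other numbers are arbitrary assumptions.
import numpy as np

def recursive_fit(frac_original=0.0, n=100, generations=1000, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0
    k = int(frac_original * n)  # fresh "human" draws kept per generation
    for _ in range(generations):
        sample = np.concatenate([
            rng.normal(mu, sigma, size=n - k),  # model-generated data
            rng.normal(0.0, 1.0, size=k),       # original-distribution data
        ])
        mu, sigma = sample.mean(), sample.std()
    return sigma

print(recursive_fit(0.0))  # pure self-training: sigma collapses toward 0
print(recursive_fit(0.1))  # 10% original data: sigma stays on the order of 1
```

The fresh draws act as an anchor: instead of drifting toward zero, the fitted variance becomes mean-reverting around a stable fixed point near the original variance.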
Overall, I like the broad question, but I'm not impressed with the execution. Their theoretical analysis suggests shrinking tails, while their OPT results are claimed to show growing tails (though I don't see it in the figures).
EDIT: Removed a few specifics that aren't as relevant to the alignment implications.
Thanks for sharing; I was planning to read this paper too. My guess coming in was that the results would not hold up with scale, for many of the reasons you mentioned. I'm kind of disappointed they didn't mention in the abstract that they used OPT-125m.
Thoughts on “The Curse of Recursion: Training on Generated Data Makes Models Forget.” I think this asks an important question about data scaling requirements: what happens if we use model-generated data to train other models? This should inform timelines and capability projections.