Planned summary for the Alignment Newsletter:

OpenAI has been working on <@finetuning language models from human preferences@>(@Fine-Tuning GPT-2 from Human Preferences@). This blog post and paper show the progress they have made on text summarization in particular since their last release.
As a reminder, the basic setup is similar to that of [Deep RL from Human Preferences](https://deepmind.com/blog/learning-through-human-feedback/): we generate candidate summaries by executing the policy, ask humans which of two summaries is better, and use these comparisons to train a reward model that is then used to improve the policy (a code sketch of this reward-learning step follows the list below). The main differences in this paper are:
1. They put a lot of effort into ensuring high data quality. Rather than having MTurk workers compare summaries, they hire a few contractors who are paid a flat hourly rate, and they invest heavily in communicating what they care about, which leads to high agreement between labelers and researchers.
2. Rather than collecting preferences in an online training setup, they collect large batches of preferences at a time and run a relatively small number of iterations alternating between training the reward model and training the policy (see the loop sketch after this list). My understanding is that this is primarily a practical simplification: for example, you can treat the large batch of data collected from humans as a unit and analyze it as such.
3. They initialize the policy from a model that is first pretrained in an unsupervised manner (as in <@GPT-3@>(@Language Models are Few-Shot Learners@)) and then finetuned on the reference summaries using supervised learning.
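To make the reward-learning step concrete, here is a minimal sketch of the kind of pairwise comparison loss typically used for this; the function and argument names are my own, not taken from the paper.

```python
import torch.nn.functional as F

def comparison_loss(reward_model, post, preferred_summary, rejected_summary):
    """Loss on a single human comparison (an illustrative sketch, not the authors' code).

    reward_model maps (post, summary) to a scalar score; minimizing this loss
    pushes the preferred summary's score above the rejected summary's score.
    """
    r_preferred = reward_model(post, preferred_summary)
    r_rejected = reward_model(post, rejected_summary)
    # -log sigmoid(r_preferred - r_rejected) is small when the preferred
    # summary is scored much higher than the rejected one.
    return -F.logsigmoid(r_preferred - r_rejected)
```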
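Putting the pieces together, the overall batched procedure looks roughly like the loop below. This is only a sketch under my reading of the setup: the policy passed in is assumed to already be pretrained and finetuned on reference summaries (point 3 above), and every helper passed as an argument is hypothetical.

```python
def human_feedback_loop(policy, reward_model, posts, num_iterations,
                        sample_summaries, collect_human_comparisons,
                        train_reward_model, train_policy_with_rl):
    """Batched alternation between reward model and policy training (illustrative sketch).

    All arguments are caller-supplied data and functions; this captures only
    the structure of the loop, not the authors' implementation.
    """
    comparisons = []
    for _ in range(num_iterations):  # a relatively small number of outer iterations
        # Sample summaries from the current policy and collect a large batch
        # of human comparisons on pairs of them.
        summaries = sample_summaries(policy, posts)
        comparisons.extend(collect_human_comparisons(summaries))

        # Fit the reward model on all comparisons gathered so far (e.g. by
        # minimizing comparison_loss above), then optimize the policy against
        # it with RL (PPO in the paper).
        reward_model = train_reward_model(reward_model, comparisons)
        policy = train_policy_with_rl(policy, reward_model)
    return policy, reward_model
```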
On the Reddit task they train on, their summaries are preferred over the reference summaries (though since the reference summaries have varying quality, this does not imply that their model is superhuman). They also transfer the policy to summarize CNN / DailyMail news articles and find that it still outperforms the supervised model, despite not being trained at all for this setting (except inasmuch as the unsupervised pretraining step saw CNN / DailyMail articles).
An important ingredient in this success is that they prevent the policy from overoptimizing the learned reward by adding a term to the reward function that penalizes deviation (measured by KL divergence) from the supervised learning baseline. They show that if they put a very low weight on this term, the policy overoptimizes the reward model and starts producing bad outputs.
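Concretely, the quantity being optimized is roughly the learned reward minus a KL-style penalty toward the supervised baseline, along the lines of the sketch below. The function name and beta are illustrative, and the paper applies the penalty per token, which I am glossing over.

```python
def penalized_reward(reward_model_score, logprob_rl, logprob_supervised, beta):
    """Learned reward minus a penalty for deviating from the supervised baseline.

    logprob_rl and logprob_supervised are the log-probabilities that the RL
    policy and the supervised baseline assign to the sampled summary; beta
    controls the penalty strength. With a very low beta, the policy can
    over-optimize the reward model and produce degenerate summaries.
    """
    return reward_model_score - beta * (logprob_rl - logprob_supervised)
```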
Planned opinion:
This paper is a great look at what reward learning would look like at scale. The most salient takeaways for me were that data quality becomes very important, and that having very large models does not mean the learned reward can be optimized arbitrarily hard without degrading output quality.