Thanks, glad you liked it! I really like the recent RL directions from OpenAI too. It would be interesting to see model-based RL used for the "RL as fine-tuning" paradigm: making large pre-trained models more aligned/goal-directed efficiently by simply searching against a reward function learned from humans.
Would you say Learning to Summarize is an example of this? https://arxiv.org/abs/2009.01325
It's model-based RL because you're optimizing against the model of the human (i.e., the reward model). And there are some results at the end on test-time search.
Or do you have something else in mind?
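To make the "test-time search against the reward model" point concrete, here is a minimal best-of-n sketch of that idea: sample several completions from a fixed pretrained model and keep the one the learned reward model rates highest, so the policy itself is never updated. This is only an illustration of the setup discussed above, not code from either paper; `generate_candidates` and `reward_model_score` are hypothetical stand-ins for an LM sampler and a reward model trained on human preference comparisons.

```python
# Minimal sketch of test-time search (best-of-n) against a learned reward model.
# `generate_candidates` and `reward_model_score` are hypothetical placeholders.

import random
from typing import Callable, List


def best_of_n(prompt: str,
              generate_candidates: Callable[[str, int], List[str]],
              reward_model_score: Callable[[str, str], float],
              n: int = 16) -> str:
    """Sample n candidate completions and return the one the reward model
    scores highest. The search over samples is what 'optimizes against'
    the model of the human; the generator's weights stay fixed."""
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=lambda c: reward_model_score(prompt, c))


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    def generate_candidates(prompt: str, n: int) -> List[str]:
        return [f"summary variant {i} of: {prompt}" for i in range(n)]

    def reward_model_score(prompt: str, completion: str) -> float:
        # A real reward model would predict human preference; here it's random.
        return random.random()

    print(best_of_n("a long article about RL fine-tuning",
                    generate_candidates, reward_model_score))
```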