Is there a primer on the difference between pretraining LLMs and doing RLHF on them afterwards? Both seem fundamentally to be doing the same thing: move the weights in the direction that increases the likelihood of outputting the given text. But I gather there are some fundamental differences in how this is done, and RLHF isn't quite just a second training round on hand-curated datapoints.
Some links I think do a good job:
https://huggingface.co/blog/rlhf
https://openai.com/research/instruction-following
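The core difference in the objectives can be sketched in a few lines. This is a toy illustration, not any library's actual API: pretraining maximizes the log-likelihood of fixed reference text, while RLHF samples from the model itself and weights the log-likelihood update by a score from a separately trained preference/reward model (shown here in its simplest REINFORCE-style form; real systems like InstructGPT use PPO plus a KL penalty against the original model).

```python
import math
import random

# Toy "policy": a probability distribution over a 3-token vocabulary,
# standing in for an LLM's next-token distribution.
probs = {"good": 0.2, "meh": 0.3, "bad": 0.5}

def pretrain_loss(target_token):
    # Pretraining / supervised fine-tuning: the target text is GIVEN,
    # and we minimize its negative log-likelihood.
    return -math.log(probs[target_token])

def reward_model(token):
    # Stand-in for the learned preference model, which in real RLHF is
    # itself trained on human comparisons of model outputs.
    return {"good": 1.0, "meh": 0.0, "bad": -1.0}[token]

def rlhf_loss():
    # RLHF: the model SAMPLES its own output, a reward model scores it,
    # and the log-likelihood term is weighted by that score. High-reward
    # samples get pushed up, low-reward samples get pushed down.
    token = random.choices(list(probs), weights=list(probs.values()))[0]
    return -reward_model(token) * math.log(probs[token]), token
```

So in pretraining the data fixes the target and the gradient always increases its likelihood, whereas in RLHF the model generates the candidates and the preference model decides the sign and magnitude of the update. That is why RLHF is not just "more training on curated text".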
Thank you. I was completely missing that they used a second ‘preference’ model to score outputs for the RL. I’m surprised that works!