Maybe reward models are expressive enough to capture all the patterns in human preferences, but it seems nice to get rid of this assumption if we can. Scaling laws suggest that larger reward models perform better (in the Gao et al. paper there is a gap between the 3B and 6B reward models), so it seems reasonable to think that even the current largest reward models are not optimal.
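For reference, my understanding of how DPO drops the separate reward model (following the Rafailov et al. paper) is that it reparameterizes the Bradley-Terry reward in terms of the policy itself, so the preference data updates the policy directly. A rough statement of the objective:

$$\mathcal{L}_{\text{DPO}}(\pi_\theta; \pi_{\text{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]$$

where $y_w$ and $y_l$ are the preferred and dispreferred responses, $\pi_{\text{ref}}$ is the reference (SFT) policy, $\beta$ controls how far the policy can move from the reference, and $\sigma$ is the sigmoid. As I understand it, this is the sense in which the explicit reward-model stage drops out, though the Bradley-Terry preference assumption itself is still there.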
As far as I know, it hasn’t been tested whether DPO scales better than RLHF. I don’t have enough experience with these techniques to have a view on whether it does.