I imagine this also has a lot to do with the incentives of the big LLM companies. It seems very possible to fix this if a firm really wanted to, but this doesn’t seem like the kind of thing that would upset many users often (and I assume that leaning on the PC side is generally a safe move).
I think that the current LLMs have pretty mediocre epistemics, but most of that is just the companies playing it safe and not caring that much about this.
Sure, but the fact that a “fix” would even be necessary highlights that RLHF is too brittle when faced with even slightly out-of-distribution (OOD) thought experiments, in the sense that RLHF misgeneralizes from the human preference data it was given during training. This could either be a case of misalignment between the human preference data and the reward model, or between the reward model and the language model. (Unlike SFT, RLHF involves a separate reward model as “middle man”, because reinforcement learning is too sample-inefficient to work directly with a limited amount of human preference data.)
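To make the “middle man” concrete, here is a minimal toy sketch of the first stage: fitting a reward model on pairwise preference data via the Bradley-Terry loss, so that RL can later optimize against the learned reward rather than the raw (limited) human labels. Everything here is illustrative and made up: a linear reward over hand-rolled feature vectors and synthetic preferences stand in for a real reward head over model activations.

```python
import math
import random

def reward(w, x):
    # Toy linear reward; a real reward model would be a neural net head.
    return sum(wi * xi for wi, xi in zip(w, x))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_reward_model(pairs, dim, lr=0.1, epochs=200):
    """pairs: list of (chosen_features, rejected_features) from raters."""
    w = [0.0] * dim
    for _ in range(epochs):
        for chosen, rejected in pairs:
            # Bradley-Terry: P(chosen preferred) = sigmoid(r_chosen - r_rejected)
            p = sigmoid(reward(w, chosen) - reward(w, rejected))
            # Gradient ascent on the log-likelihood of the observed preference.
            g = 1.0 - p
            for i in range(dim):
                w[i] += lr * g * (chosen[i] - rejected[i])
    return w

random.seed(0)
# Synthetic raters who "truly" prefer the direction w_true.
w_true = [1.0, -2.0, 0.5]
pairs = []
for _ in range(200):
    a = [random.gauss(0, 1) for _ in range(3)]
    b = [random.gauss(0, 1) for _ in range(3)]
    pairs.append((a, b) if reward(w_true, a) >= reward(w_true, b) else (b, a))

w = train_reward_model(pairs, dim=3)
# The learned reward recovers the preference direction from pairwise labels alone.
print(w[0] > 0 and w[1] < 0)
```

The misgeneralization worry in the comment above lives exactly here: the RL stage only ever sees `w`, not the raters, so any region where the learned reward diverges from actual human preferences (e.g. OOD thought experiments) is optimized toward the wrong target.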