Some major issues include:
1. “Policies that look good to the evaluator in the training setting” are not the same as “Policies that it would be safe to unboundedly pursue”, so under an outer/inner decomposition, it isn’t at all outer aligned.
2. Even if you don’t buy the outer/inner decomposition, RLHF is a more limited way of specifying intended behavior+cognition than what we would want, ideally. (Your ability to specify what you want depends on the behavior you can coax out of the model.)
3. RLHF implementations to date don’t actually use real human feedback as the reward signal, because human feedback does not scale easily. They instead try to approximate it with learned models. But those approximations are really lossy ones, with lots of errors that even simple optimization processes like SGD can exploit, causing the result to diverge pretty severely from the result you would have gotten if you had used real human feedback. (See the sketch after this list.)
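For concreteness, here is a minimal sketch (PyTorch, with illustrative names and shapes, not any particular lab's implementation) of the setup point 3 describes: a reward model trained only on pairwise human comparisons, whose scalar scores, rather than the humans themselves, supply the reward signal during RL.

```python
# Minimal sketch: a learned reward model trained on pairwise human preference labels.
# The human only supplies comparisons; the scalar reward the RL step optimizes comes
# from this learned proxy. All module names and sizes here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        # Stand-in for a transformer encoder; real pipelines reuse the LM backbone.
        self.encoder = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.Tanh())
        self.score_head = nn.Linear(hidden_dim, 1)  # response embedding -> scalar reward

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score_head(self.encoder(response_embedding)).squeeze(-1)

def preference_loss(rm: RewardModel, chosen: torch.Tensor, rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style loss: push the score of the human-preferred response
    above the score of the rejected one."""
    return -F.logsigmoid(rm(chosen) - rm(rejected)).mean()

# Usage with dummy embeddings standing in for encoded (prompt, response) pairs.
rm = RewardModel()
chosen, rejected = torch.randn(8, 768), torch.randn(8, 768)
loss = preference_loss(rm, chosen, rejected)
loss.backward()  # the RL policy later optimizes against rm's scores, not against humans
```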
I don’t personally think that RLHF is unsalvageably bad, just a far cry from what we need, by itself.
Yes, it’s true that RLHF breaks down if you apply arbitrary amounts of optimization pressure against the learned reward, and that’s why you have to add a KL-divergence penalty keeping the policy close to the reference model, and that this KL penalty is difficult to calibrate.
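For illustration, here is a minimal sketch (PyTorch; the function name, shapes, and the default beta are all assumptions, not a specific library's API) of the KL-penalized reward shaping being referred to: the learned reward minus a divergence penalty against the frozen reference model.

```python
# Minimal sketch of the KL-penalized reward used in RLHF-style policy optimization.
# policy_logprobs / ref_logprobs are per-token log-probabilities of the sampled
# response under the trained policy and the frozen reference model; beta is the
# coefficient the comment says is hard to calibrate.
import torch

def kl_shaped_rewards(reward_model_score: torch.Tensor,   # shape (batch,)
                      policy_logprobs: torch.Tensor,      # shape (batch, seq_len)
                      ref_logprobs: torch.Tensor,         # shape (batch, seq_len)
                      beta: float = 0.1) -> torch.Tensor:
    # Per-token divergence estimate: log pi(y_t | x) - log pi_ref(y_t | x).
    per_token_kl = policy_logprobs - ref_logprobs
    # Penalize the sequence-level reward by the accumulated drift from the reference model.
    return reward_model_score - beta * per_token_kl.sum(dim=-1)

# Dummy usage: too small a beta lets the policy drift into reward-model exploits,
# too large a beta pins it to the reference model and discards the feedback signal.
scores = torch.tensor([1.2, 0.4])
pi_lp, ref_lp = -torch.rand(2, 16), -torch.rand(2, 16)  # stand-in per-token log-probs
print(kl_shaped_rewards(scores, pi_lp, ref_lp))
```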
I don’t understand why the outer/inner decomposition is relevant to my question; I am only talking about outer alignment.
Point 3 is wrong, because in the InstructGPT paper, the RLHF-trained GPT is rated more highly than the supervised fine-tuned GPT.