I think we’re talking past each other here. Some subtle points I should have been more clear on:
- This approach to gradient hacking doesn’t affect the RLHF loss at all. (The gradient hacking is only initiated after we’ve solved our tasks, and only in samples where the reward won’t be affected by additional text.)
- PPO RLHF training is weird because of the circularity involved: the model generates its own training samples. In this way RL is not like pre-training, and consequently you can get self-reinforcing phenomena out of it, like mode collapse. I think my proposal is an example in that vein.
- This kind of gradient hacking could also be attempted in plain sight by a helpful assistant that just wants to learn to be even more helpful!
To state the main idea a different way:
The suggestion is that the network could abuse the fact that RLHF is happening to “ride along” with the update, using it to train some unrelated behavior of its choice.
The way it would hypothetically do this is by figuring out which way the gradient is going to hit, i.e. whether the update will increase or decrease the likelihood of the current sample (say, by waiting until it is confident that its current sample will be the favored one in PPO), and then, once it knows that direction, injecting text or thought patterns that it wants to encourage.
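For concreteness, here is a minimal sketch of the mechanism I have in mind, written in PyTorch. It is a toy illustration, not a claim about any particular RLHF implementation: I use a single sequence-level advantage instead of per-token GAE, and I omit the PPO clipping and KL terms. The point it shows is just that the advantage is determined by the task portion of the sample, but it scales the likelihood gradient on every token the policy emitted, so any "ride-along" text appended after the task is done gets pushed in the same direction.

```python
import torch

# Toy setup: a sampled completion of 12 tokens, where the reward model only
# cares about the first 8 "task" tokens; the last 4 are hypothetical
# ride-along tokens appended after the task is already solved.
vocab, seq_len = 50, 12
logits = torch.randn(seq_len, vocab, requires_grad=True)  # stand-in for policy outputs
sample = torch.randint(0, vocab, (seq_len,))              # the sampled completion
task_len = 8

# Assume this sample turns out to be the favored one, so the advantage is positive.
advantage = 1.0

# Log-probability the policy assigned to each sampled token.
log_probs = torch.log_softmax(logits, dim=-1)[torch.arange(seq_len), sample]

# Simplified policy-gradient surrogate (no clipping, no KL penalty): the
# advantage comes from the task tokens, but it multiplies the log-likelihood
# of the whole sequence, ride-along tokens included.
loss = -(advantage * log_probs).sum()
loss.backward()

# The gradient on the ride-along tokens is nonzero and points toward making
# them more likely, even though they never affected the reward.
print(logits.grad[task_len:].abs().sum())  # > 0
```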