To check my understanding, is your view something like:
1. If the reward model isn’t adversarially robust, then the RL component of RLHF will exploit it.
2. These generations will show up in the data presented to the human. Provided the human is adversarially robust, the human feedback will provide a corrective signal to the reward model.
3. The reward model will stop being vulnerable to those adversarial examples, although it may still be vulnerable to other adversarial examples.
4. If we repeat this iterative process enough times, we’ll end up with a robust reward model.
Under this model, improving adversarial robustness just means we need fewer iterations, showing up as improved sample efficiency.
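In pseudocode, the loop I'm imagining looks something like this (a rough sketch; every function name here is a hypothetical placeholder, not a real API):

```python
# Minimal sketch of the iterative picture in steps 1-4 above.
# train_reward_model, rl_optimize, sample_prompts and human_label are
# all hypothetical placeholders.

def iterated_rlhf(policy, preference_data, n_rounds):
    for _ in range(n_rounds):
        # Fit a reward model to all human preference data collected so far.
        reward_model = train_reward_model(preference_data)

        # Step 1: RL against the (possibly non-robust) reward model;
        # the policy tends to find and exploit its weaknesses.
        policy = rl_optimize(policy, reward_model)

        # Step 2: show the new policy's generations to humans, who give
        # corrective preference labels on any exploits they spot.
        samples = [policy.generate(p) for p in sample_prompts()]
        preference_data += human_label(samples)

        # Steps 3-4: the next round's reward model is no longer fooled by
        # these particular exploits, though others may remain.
    return policy
```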
I agree with this view up to a point. It does seem likely that with sufficient iterations, you'd get an accurate reward model. However, I think the difference in sample efficiency could be profound: e.g. exponential (needing to get explicit corrective human feedback for most adversarial examples) vs linear (generalizing in the right way from human feedback). In that scenario, we may as well just ditch the reward model and provide training signal to the policy directly from human feedback.
In practice, we’ve seen that adversarial training (with practical amounts of compute) improves robustness but models are still very much vulnerable to attacks. I don’t see why RLHF’s implicit adversarial training would end up doing better than explicit adversarial training.
In general I tend to view sample efficiency discussions as tricky without some quantitative comparison. There's a sense in which decision trees and a variety of other simple learning algorithms are a viable approach to AGI; they're just very sample (and compute) inefficient.
The main reason I can see why RLHF may not need adversarial robustness is if the KL-penalty-from-base-model approach people currently use is actually enough.
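(By that I mean the standard reward-shaping setup, sketched roughly below with an arbitrary beta; the arguments are placeholders for per-sequence log-probabilities.)

```python
import torch

def kl_penalized_reward(rm_score: torch.Tensor,
                        logprob_policy: torch.Tensor,
                        logprob_base: torch.Tensor,
                        beta: float = 0.1) -> torch.Tensor:
    # Reward model score minus beta times the sampled log-ratio between the
    # current policy and the frozen base model (a Monte Carlo estimate of the
    # KL penalty). Larger beta ties the policy more tightly to the base model.
    return rm_score - beta * (logprob_policy - logprob_base)
```

The size of beta is the knob doing the work here: it bounds how far the policy can drift from the base model, and hence how hard it can push on the reward model's weaknesses.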
This is pretty close to my understanding, with one important objection.
Thanks for responding and trying to engage with my perspective.
Objection
If we repeat this iterative process enough times, we’ll end up with a robust reward model.
I don't claim we'll necessarily ever get a fully robust reward model, just that the reward model will be mostly robust on average to the actual policy you use, as long as human feedback is provided at a frequent enough interval. We never needed a robust reward model which works on every input; we just needed a reward model which captured our local preferences about the actual current policy distribution sufficiently well.
At any given point, the reward model will be vulnerable to arbitrary adversarial attacks under sufficient optimization pressure, but we don’t need arbitrary optimization against any given reward model. Like, each human update lets you optimize a bit more against the reward model which gets you the ability to get somewhat closer to the policy you actually want.
KL penalty is presumably an important part of the picture.
My overall claim is just that normal RLHF is pretty fault tolerant and degrades pretty gracefully with lack of robustness.
Sample efficiency seems fine now and progress continues
Beyond this, my view is considerably informed by 'sample efficiency seems fine in practice now and is getting better, not worse, with more powerful models (from my limited understanding)'. It's possible this trend could reverse with sufficiently big models, but I'd find this surprising and don't see any particular reason to expect it. Technical advancement in RLHF is ongoing and will improve sample efficiency further.
It seems to me like your views imply either that sample efficiency is low enough now that high-quality RLHF currently can't compete with other ways of training AIs which are less safe but have cheaper reward signals (e.g., heavy training on automated outcome-based feedback or a low-quality human reward signal), or that this will happen in the future as models get more powerful (reversing the current trend toward better sample efficiency). My understanding is that RLHF is quite commercially popular and sample efficiency isn't a huge issue. Perhaps I'm missing some important gap between how RLHF is used now and how it will have to be used in the future to align powerful systems?
Also note that vanilla RLHF doesn't necessarily require optimizing against a reward model. For instance, a recently released paper, Direct Preference Optimization (DPO), avoids this step entirely. It's unclear if DPO will win out for various reasons, but as far as I can tell, DPO should totally resolve the robustness concern with vanilla RLHF. Of course, recursive oversight schemes and other things which use AIs to help oversee AIs could still involve optimizing against AIs.
Edit: I originally linked to the wrong paper, oops
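For concreteness, the objective DPO optimizes looks roughly like the following sketch (argument names are placeholders; inputs are summed log-probabilities of whole completions under the policy and a frozen reference model, and batching details are simplified):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_chosen: torch.Tensor,
             policy_logp_rejected: torch.Tensor,
             ref_logp_chosen: torch.Tensor,
             ref_logp_rejected: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Log-ratios of the trained policy against the frozen reference model,
    # for the human-preferred and dispreferred completions respectively.
    chosen_ratio = policy_logp_chosen - ref_logp_chosen
    rejected_ratio = policy_logp_rejected - ref_logp_rejected
    # The preference pairs are fit directly; no explicit reward model is
    # trained or optimized against.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```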
Thanks for the follow-up, this helps me understand your view!
At any given point, the reward model will be vulnerable to arbitrary adversarial attacks under sufficient optimization pressure, but we don’t need arbitrary optimization against any given reward model. Like, each human update lets you optimize a bit more against the reward model which gets you the ability to get somewhat closer to the policy you actually want.
Sure, this feels basically right to me. My reframing of this would be that we could in principle do RL directly with feedback provided by a human. Learning a reward model lets us gain some sample efficiency over this, but sometimes it fails to generalize correctly to what a human would say, and adversarial examples are an important special case of this. But provided the policy is the final output of this process, not the reward model, it doesn't matter whether the reward model is robust, just that the feedback the policy has received has steered it into a good position.
It seems to me like your views imply either that sample efficiency is low enough now that high-quality RLHF currently can't compete with other ways of training AIs which are less safe but have cheaper reward signals (e.g., heavy training on automated outcome-based feedback or a low-quality human reward signal), or that this will happen in the future as models get more powerful (reversing the current trend toward better sample efficiency). My understanding is that RLHF is quite commercially popular and sample efficiency isn't a huge issue. Perhaps I'm missing some important gap between how RLHF is used now and how it will have to be used in the future to align powerful systems?
This argument feels like it’s proving too much. InstructGPT isn’t perfect, but it does produce a lot less toxic output and follow instructions a lot better than the base model GPT-3. RLHF seems to work, and GPT-4 is even better, showing that it gets easier with bigger models. Why should we expect this trend to reverse? Why are we worried about this safety thing anyway?
I actually find this style of argument pretty plausible: I'm a relative optimist on this forum, and I do think that some fairly basic methods like a souped-up RLHF might well be sufficient to make things go OK (though I'd prefer more principled methods that give us a bigger safety margin). But I'm somewhat surprised to hear you making that case!
Suppose we condition on RLHF failing. At a high level, failures split into: (a) human labelers rewarded the wrong thing (e.g. fooling humans); (b) the reward model failed to predict human labelers' judgement and rewarded the wrong thing (e.g. reward hacking); (c) RL produced a policy that is capable enough to be dangerous but is optimizing something other than the reward model (e.g. mesa-optimization). I find all three of these risks plausible, and I don't see a specific reason to privilege (a) or (c) substantially over (b). It sounds like you're most concerned about (a), and I'd love to hear your reasons for that.
However, I do think it’s interesting to explore concrete failure modes: given RLHF is working well now, what does my view imply about how it might stop working? One scenario I find plausible is that as models get bigger, the sample efficiency of RLHF continues to increase, since the models have higher fidelity representations of a greater variety of tasks. RLHF therefore just needs to localize the task that’s already in the network. However, the performance of this process is ultimately capped by what the base model already represents. I can see two ways this could go wrong:
1. Misaligned base model. If the foundation model that’s being RLHF’d is already misaligned, then a small amount of RL training is not going to be enough to disabuse it of this. By contrast, a large amount of RL training (of the order of 10% of the training steps used for self-supervised learning) with a high-fidelity reward signal might. Unfortunately, we can’t do that much RL training without just reward hacking, so we never try. (Or we take that many time steps, but with a KL penalty, forcing the model to stay close to the unaligned base model.)
2. Harmless base model. If the foundation model starts off harmless (not necessarily aligned, just not actively trying to cause harm), then I’d expect RLHF’ing it to only improve things so long as the training signal never rewards bad behavior. However, the designers want the model to significantly outperform humans at this task. The model has capacity to learn to do this, but can’t just leverage existing capabilities in the foundation model, as the performance of that model is limited to that of the best humans it saw in the self-supervised training data. So, we need to do RL for many more time steps. Collecting fresh human data for that is prohibitive, so we rely on a reward model—unfortunately that gets hacked.
I think I find the second case more compelling. The first case seems concerning as well, but it seems like quite a scary scenario even if robustness gets solved, whereas I expect fixing robustness to actually make a significant dent in the second scenario.
That said, I feel like I should emphasize in all this that I largely think robustness is an intractable problem to solve, and that while it’s worth trying to improve it at the margin I’m most excited by efforts to make systems not need robustness. I think you make a good point that having humans in the loop in the training makes RLHF degrade more gracefully than reward learning approaches that train on a fixed offline dataset, and the KL penalty also helps. I suspect there are many similar algorithmic tweaks that would make algorithms less sensitive to robustness.
For example, riffing off going from an offline to online dataset, could we improve things further by collecting a dataset that anticipates where the RL process might try to exploit it? That sounds fanciful, but there’s a simple hack you can do to get something like this. Just train a policy on the current reward model. Then collect human feedback from that policy. Then roll the policy back to the last checkpoint, and repeat the training using the new reward model. You could do this step once per checkpoint, or keep doing it until you get human approval to move on (e.g. the reward model now aligns with human feedback). In this way you should be able to avoid ever giving the policy reward for the wrong behavior. I suspect this process would be much less sample efficient than vanilla RLHF, but it would have better safety properties, and measuring how much slower it is could be a good proxy for how severe the “robustness tax” is.
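In pseudocode, that hack would look roughly like this (every helper function here is a hypothetical placeholder, not a real API):

```python
# Rough sketch of the checkpoint-rollback scheme described above.
# train_policy, collect_human_feedback, train_reward_model and
# reward_model_matches_humans are hypothetical placeholders.

def rollback_rlhf(policy, reward_model, preference_data, n_segments):
    for _ in range(n_segments):
        checkpoint = policy
        while True:
            # Train against the current reward model for one segment.
            candidate = train_policy(checkpoint, reward_model)

            # Collect human feedback on what that candidate policy actually
            # does, and fold it back into the reward model.
            new_labels = collect_human_feedback(candidate)
            preference_data = preference_data + new_labels
            reward_model = train_reward_model(preference_data)

            # If the reward model now agrees with the human labels on this
            # behaviour, accept the segment; otherwise roll back to the
            # checkpoint and retrain against the updated reward model.
            if reward_model_matches_humans(candidate, new_labels):
                policy = candidate
                break
    return policy
```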
Also note that vanilla RLHF doesn't necessarily require optimizing against a reward model. For instance, a recently released paper, Direct Preference Optimization (DPO), avoids this step entirely.
I'm a bit confused by the claim here, although I've only read the abstract and skimmed the paper so perhaps it'd become obvious from a closer read. As far as I can tell, the cited paper focuses on motion planning, and considers a rather restricted setting of LQR policies. This is a reasonable starting point, but a human communicating that a cart pole should stand up (or a desired quadcopter trajectory) feels much simpler than even toy tasks for RLHF in LLMs like summarizing text. So I don't really see this as much evidence in favor of being able to drop the reward model. Generally, any highly sample-efficient model-based RL approach would enable RL training to proceed without a reward model by having humans directly label the trajectory.
The model has capacity to learn to do this, but can’t just leverage existing capabilities in the foundation model, as the performance of that model is limited to that of the best humans it saw in the self-supervised training data. So, we need to do RL for many more time steps.
It's unclear to me if this is true, because modelling the humans that generated the training data sufficiently well probably requires the model to be smarter than the humans it is modelling. So I expect the current regime, where RLHF just elicits a particular persona from the model rather than teaching any new abilities, to be sufficient to reach superhuman capabilities.
(Here's a possibly arcane remark. It's worth noting that I think always-correct reward models are sufficient for high-stakes alignment via runtime filtering (technically you just need to never give a very bad output decent reward). So, always-correct reward models would be great if you could get them. Note that always correct could look pretty different from current adversarial robustness; in particular, you also need to worry about collusion and stuff like seizing physical control of the reward channel. I also think that avoiding high-stakes failure via adversarial training and/or other robustness-style tech seems generally good. I've been assuming we're talking about the low-stakes alignment problem (aka outer alignment).)
This argument feels like it’s proving too much. InstructGPT isn’t perfect, but it does produce a lot less toxic output and follow instructions a lot better than the base model GPT-3. RLHF seems to work, and GPT-4 is even better, showing that it gets easier with bigger models. Why should we expect this trend to reverse? Why are we worried about this safety thing anyway?
I actually find this style of argument pretty plausible: I'm a relative optimist on this forum, and I do think that some fairly basic methods like a souped-up RLHF might well be sufficient to make things go OK (though I'd prefer more principled methods that give us a bigger safety margin). But I'm somewhat surprised to hear you making that case!
I think this trend extrapolation argument seems fine in the absence of a specific defeater. In the case of AI takeover, there is a clear defeater to ‘models have been getting more aligned over time’.
I was trying to get at which specific defeater you thought overcame the trend extrapolation argument. Here's an attempt at an exhaustive list of defeaters:
1. Thinking sample efficiency would get worse in the future, breaking the current trend.
2. Thinking that the current 'trend extrapolated' sample efficiency would be insufficient or otherwise worth improving on the margin.
3. Thinking that there would be important negative consequences of reward model non-robustness which aren't well described by sample efficiency (e.g., teaching the model to lie via first practicing on the reward model, or having an easier time exploring into bad behavior or something).
It's also worth noting that if I were centrally interested in (2), I would push on that directly. (But you have other applications of robustness in mind, so this might not be that interesting.)
I still don't understand which of (1), (2), or (3) you're most worried about. (Maybe (3), based on some argument I don't yet understand? I also don't see why hacking the reward model is dangerous, which is maybe an important crux here.)
Aside on alignment trend extrapolation:
I'm also not really sure how to measure 'aligned' in a way that makes sense given that models have also been getting smarter. It seems plausible that alignment has been notably improving over time? Beyond this, the more natural trend extrapolation might be takeover risk. My guess is that the ex-ante takeover risk from GPT-4 should have been like 0.1% and then the future/ongoing risk is more like 0.001% or something. And the trend extrapolation doesn't look good here : ).
I still don't understand which of (1), (2), or (3) you're most worried about.
Sample efficiency isn’t the main way I think about this topic so it’s a bit difficult to answer. I find all these defeaters fairly plausible, but if I had to pick the central concern it’d be (3).
I tend to view ML training as a model taking a path through a space of possible programs. There are some programs that are capable and aligned with our interests; others that are capable but will actively pursue harmful goals; and of course many other programs that just don't do anything particularly useful. Assuming we start with a model that is aligned (where "aligned" could include "the model cannot do anything useful so does not cause any harm") and we only reward positive behavior, I find it plausible that we can hill-climb to more capable models while preserving alignment.
However, suppose we at some point err and reward undesirable behavior. (This could occur due to incorrect human feedback, a reward model that is not robust, or some other issue.) At this point, we're training a sub-component of the system that is actively opposed to our interests. Hopefully, we eventually discover this sub-component and can then disincentivize it in the training process. But at that point, there is some uncertainty in my mind: will the training process remove the sub-component, or simply train the sub-component into being better able to fool the training process?
Now, we don’t need the reward model to be perfectly robust to avoid this (as you quite rightly point out), just robust in the region of policy space around the current policy where the RL algorithm is likely to explore. But empirically current reward model robustness falls short of even this.
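As a crude way to operationalize "robust in the region the RL algorithm is likely to explore", one could measure how often the reward model ranks pairs of the current policy's own samples the same way a human does. A rough sketch, with hypothetical policy, reward-model, and human-label interfaces:

```python
import itertools

def local_agreement(policy, reward_model, human_prefers_a, prompts, k=4):
    # Fraction of pairwise comparisons, over samples drawn from the *current*
    # policy, on which the reward model agrees with the human judgement.
    agree, total = 0, 0
    for prompt in prompts:
        completions = [policy.generate(prompt) for _ in range(k)]
        for a, b in itertools.combinations(completions, 2):
            rm_prefers_a = (reward_model.score(prompt, a)
                            > reward_model.score(prompt, b))
            agree += int(rm_prefers_a == human_prefers_a(prompt, a, b))
            total += 1
    return agree / total
```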
In response to:
2. Harmless base model. If the foundation model starts off harmless (not necessarily aligned, just not actively trying to cause harm), then I’d expect RLHF’ing it to only improve things so long as the training signal never rewards bad behavior. However, the designers want the model to significantly outperform humans at this task. The model has capacity to learn to do this, but can’t just leverage existing capabilities in the foundation model, as the performance of that model is limited to that of the best humans it saw in the self-supervised training data. So, we need to do RL for many more time steps. Collecting fresh human data for that is prohibitive, so we rely on a reward model—unfortunately that gets hacked.
you write:
Are you assuming that we can’t collect human data online as the policy optimizes against the reward model? (People currently do collect data online to avoid getting hacked like this.) This case seems probably hopeless to me without very strong regularization (I think you agree with this being mostly hopeless), but it also seems easy to avoid by just collecting human data online.
No, I do expect online data collection to take place; I just don't expect to be able to do that data collection fast enough or in large enough volumes to kick in before hacking takes place. I think in your taxonomy this is defeater (2): I think we'll need substantially more samples to train superhuman models than we do to train human-level models, as the demands on RLHF switch from localizing a task that the network already knows how to perform to teaching the model to perform a new capability (safely). (I will note that online data collection is a pain and people seem to try to do as little of it as possible.)
Collecting fresh human data for that is prohibitive, so we rely on a reward model—unfortunately that gets hacked.
Are you assuming that we can’t collect human data online as the policy optimizes against the reward model? (People currently do collect data online to avoid getting hacked like this.) This case seems probably hopeless to me without very strong regularization (I think you agree with this being mostly hopeless), but it also seems easy to avoid by just collecting human data online.
Suppose we condition on RLHF failing. At a high level, failures split into: (a) human labelers rewarded the wrong thing (e.g. fooling humans); (b) the reward model failed to predict human labelers' judgement and rewarded the wrong thing (e.g. reward hacking); (c) RL produced a policy that is capable enough to be dangerous but is optimizing something other than the reward model (e.g. mesa-optimization).
I don’t really see why (b) leads to dangerous failures. It seems like failures should be totally benign and just result in somewhat lower production?
Beyond this, it seems like this failure should happen early as it doesn’t require clever models to occur, so by default there will be strong commercial incentives to resolve this.
I agree it’s an alignment failure in some sense which could be addressed by alignment technology. I just think it isn’t very important to reduce from an X-risk/AI takeover perspective.
Related to this, you say:
I suspect this process would be much less sample efficient than vanilla RLHF, but it would have better safety properties, and measuring how much slower it is could be a good proxy for how severe the “robustness tax” is.
What specific safety properties are you thinking about?
As far as I can tell sample efficiency is the only safety property which seems important here.
There could be other important considerations beyond sample efficiency for avoiding your policy doing some hacking of the reward model, but none of the ones I can think of seem importantly dangerous to me.
Naively, I’d guess it’s slightly bad on top of the sample efficiency issues, but mostly unimportant.
Suppose we condition on RLHF failing. At a high level, failures split into: (a) human labelers rewarded the wrong thing (e.g. fooling humans); (b) the reward model failed to predict human labelers' judgement and rewarded the wrong thing (e.g. reward hacking); (c) RL produced a policy that is capable enough to be dangerous but is optimizing something other than the reward model (e.g. mesa-optimization).
Minor point, feel free to ignore.
FWIW, I typically use ‘reward hacking’ to refer to just (a) here. I’d just call (b) ‘poor reward model sample efficiency’. That said, I more centrally use ‘reward hacking’ to describe hacking a reward process based on outcomes via stuff like ‘sensor tampering’, but this is still a subset of RLHF: the subset where humans look at outcomes and then assess reward taking this into account.
Oh, we’re using terminology quite differently then. I would not call (a) reward hacking, as I view the model as being the reward (to the RL process), whereas humans are not providing reward at all (but rather some data that gets fed into a reward model’s learning process). I don’t especially care about what definitions we use here, but do wonder if this means we’re speaking past each other in other areas as well.
I’m a bit confused by the claim here, although I’ve only read the abstract and skimmed the paper so perhaps it’d become obvious from a closer read. As far as I can tell, the cited paper focuses on motion-planning, and considers a rather restricted setting of LQR policies.
I originally linked to the wrong paper! : (
Here is the actual Direct Preference Optimization paper. (I guess I just googled something like 'DPO RL' and then didn't actually check that it was the right paper.)
Yikes, sorry for wasting your time.
Ah, that paper makes a lot more sense. A reward model was attractive in the original Deep RL From Human Preferences paper because the environment was complex and non-differentiable: using RL was a natural fit. It's always seemed a bit strange to use RL for fine-tuning language models, especially in the prompt-completion setting where the "environment" is trivial. (RL becomes more natural when you start introducing external tools, or conversations with humans.)
I'll need to take a closer look at the paper, but it looks like they derive the DPO objective by starting from the RL objective under KL optimization. So if it does what it says on the tin, then I'd expect the resulting policy incentives to be similar. My hunch is that the problem of reward hacking has shifted from an explicit to an implicit problem rather than being eliminated, although I'm certainly not confident in this. Could be interesting to study using a similar approach to the Scaling Laws for Reward Model Overoptimization paper.
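(For reference, the derivation as I understand it: start from the KL-regularized RL objective

$$\max_{\pi_\theta}\; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}\big[r(x, y)\big] \;-\; \beta\, \mathbb{D}_{\mathrm{KL}}\!\big[\pi_\theta(y \mid x)\,\|\,\pi_{\mathrm{ref}}(y \mid x)\big],$$

whose optimum has the closed form

$$\pi^*(y \mid x) \;=\; \frac{1}{Z(x)}\, \pi_{\mathrm{ref}}(y \mid x)\, \exp\!\big(r(x, y)/\beta\big).$$

Inverting this gives $r(x, y) = \beta \log \frac{\pi^*(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} + \beta \log Z(x)$, and substituting that implied reward into the Bradley–Terry preference likelihood yields the pairwise DPO loss sketched earlier, with the same $\beta$ playing the role of the KL coefficient.)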