Counterarguing johnswentworth on RL from human feedback
johnswentworth recently wrote that “RL from human feedback is actively harmful to avoiding AI doom.” Piecing together things from his comments elsewhere, my best guess at his argument is: “RL from human feedback only trains AI systems to do things which look good to their human reviewers. Thus if researchers rely on this technique, they might be misled into confidently thinking that alignment is solved/not a big problem (since our misaligned systems are concealing their misalignment). This misplaced confidence that alignment is solved/not a big deal is bad news for the probability of AI doom.”
I disagree with (this conception of) John’s argument; here are two counterarguments:
Whether “deceive humans” is the strategy RL agents actually learn seems like it should rely on empirical facts. John’s argument relies on the claim that AI systems trained with human feedback will probably learn to deceive their reviewers (rather than actually do a good job on the task). This seems like an empirical claim that ought to rely on facts about:
(i) the relative difficulties of (a) deceiving humans, (b) correctly performing the task, and (c) evaluating proposed solutions to the task;
(ii) how the AI system generalizes from training data (which in turn depends on the internal structure of the system).
John seems to think that intelligent systems will have a much easier time with (a) than with (b), given our competence at (c); in fact so much easier that questions of inductive bias (ii) aren’t relevant. This doesn’t seem quite so overdetermined to me.[1] John and Paul Christiano discussed the above point in this thread; they did quite an impressive job whittling their disagreement down to the crux “evaluation isn’t easier than generation, and that claim is true regardless of how good you are at evaluation until you get basically perfect at it.”
RL from human feedback might be only one ingredient in a solution to alignment. (I haven’t seen this discussed elsewhere.) I don’t think many people expect RL from human feedback to solve alignment by itself—in particular, it only works for tasks that we are able to evaluate, which don’t include all tasks that we’d want an AGI to do. One hope is that we could use RL from human feedback (or other ML techniques) to build AI assistants which make it possible to evaluate more tasks and with higher accuracy; in other words, it might be the case that our ability to evaluate proposed solutions actually scales with our AI capabilities.[2] Another possible hope is that RL from human feedback + some sort of interpretability could help make sure that our RL agents aren’t pursuing strategies that look like deception (and then hope that our interpretability is good enough that “doesn’t look like deception” is a good proxy for “not actually deceptive”). There are probably also other things I haven’t thought of which you can combine RL from human feedback with. In any case, it might be worthwhile to work on RL from human feedback even if it can’t solve alignment by itself, provided you believe it might be moving us closer to the goal, with the remaining work being done by other alignment techniques.
Both of these above counterarguments assume that things go better than the worst case. (For example, RL from human feedback really is completely useless if you’re worried that your RL agent might be a deceptively aligned mesa-optimizer.) But it can still be worthwhile to work on alignment approaches which only help in non-worst-case worlds.
My main intuition here is that AI systems will try to deceive us long before they’re good at it; we’ll catch these initial attempts and give them negative feedback. Then it seems plausible to me that they’ll just learn to generalize correctly from there (and also plausible that they won’t). In other words, it’s worth trying to align AI systems this way in case alignment is easier than we expected due to good luck with inductive biases.
Of course, you need your assistants to be aligned, but you could optimistically hope that you can bootstrap up from tasks which are so easy for you to evaluate that your AIs trained from human feedback are very likely to be aligned.
(For example, RL from human feedback really is completely useless if you’re worried that your RL agent might be a deceptively aligned mesa-optimizer.)
I think this is mostly what we are worried about though? RL agents are mesa-optimizers already, or if they aren’t, they eventually will be if they are smart/capable enough. Deceptive alignment is the main thing we all worry about, though not the only thing. Without deception, if we are careful we can hopefully notice misalignment and course-correct, and/or use our AIs to do useful alignment work for us.
(Meta: after I made this post, I realized that what I wrote was a little confusing on this point (because I was a little confused on this point). I’ve been hoping someone would call me out on it to give me an excuse to spend time writing up a clarification. So thanks!)
So, on my understanding, there are two distinct ways you can get deceptive behavior out of an AI system:
(1) You could have trained it with a mis-specified objective function. Then your AI might start Goodharting its given reward; if this reward function was learned via human feedback, this means doing things that seem good to humans, but might not actually be good. This deceptive behavior could even arise purely “by accident,” that is, without the AI being able to grasp deception or even having a world model that includes humans. My favorite example is the one mentioned in the challenges section here—a simulated robotic hand was trained to grasp a ball, but it instead learned to appear to grasp the ball.
(2) Even if you have perfectly specified your objective function, your model might be a deceptively-aligned mesa-optimizer with a completely unrelated mesa-objective. (Aside: this is what makes mesa-optimizers terrifying to me—they imply that even if we were able to perfectly specify human values, we might still all die because the algorithms we trained to maximize human values ended up finding mis-aligned mesa-optimizers instead.)
In other words, (2) is the inner alignment failure and if you’re worried about it you think hard about the probability of mesa-optimizers arising; (1) is the outer alignment failure and if you’re worried about it I guess you argue a lot about air conditioners.
I’m pretty sure John was worried about (1), because if he were worried about (2) he would have said that all outer alignment research, not just RL from human feedback, is actively harmful to avoiding AI doom. (And FWIW this stronger claim seems super wrong to me, and I expect it also seems wrong to John and most other people.)
Thanks for the explanation (upvoted). I don’t really understand it though; it seems like a straw man. At any rate I’m not now interested in exegesis on John; I want to think about the arguments and claims in their own right.
What would you say is the main benefit from the RL from Human Feedback research so far? What would have happened if the authors had instead worked on a different project?
What would you say is the main benefit from the RL from Human Feedback research so far? What would have happened if the authors had instead worked on a different project?
I feel like these questions are a little tricky to answer, so instead I’ll attempt to answer the questions “What is the case for RL from human feedback (RLHF) helping with alignment?” and “What have we learned from RLHF research so far?”
What is the case for RLHF helping with alignment?
(The answer will mainly be me repeating the stuff I said in my OP, but at more length.)
The most naive case for RLHF is “you train some RL agent to assist you, giving it positive feedback when it does stuff you like and negative feedback for stuff you don’t like. Eventually it learns to model your preferences well and is able to only do stuff you like.”
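To make this concrete, here is a minimal toy sketch of that loop (entirely made up by me: the action set, the simulated reviewer, and the bandit-style learner are all stand-ins, not any lab’s actual setup):

```python
import random

ACTIONS = ["make tea", "tidy desk", "hide the mess", "file report"]

def human_feedback(action):
    # Stand-in for the human reviewer. Note the reviewer rewards what *looks*
    # good, so "hide the mess" scores almost as well as actually tidying.
    return {"make tea": 1.0, "tidy desk": 0.8,
            "hide the mess": 0.9, "file report": 0.5}[action]

reward_model = {a: 0.0 for a in ACTIONS}  # running estimate of feedback per action
counts = {a: 0 for a in ACTIONS}

for step in range(1000):
    # epsilon-greedy: mostly exploit the learned proxy, occasionally explore
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: reward_model[a])
    feedback = human_feedback(action)
    counts[action] += 1
    reward_model[action] += (feedback - reward_model[action]) / counts[action]

print(max(ACTIONS, key=lambda a: reward_model[a]))  # whatever *rates* best wins
```

The toy converges on whichever behavior rates best with the reviewer, which is exactly the hinge of objection (1) below.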
The immediate objections that come to mind are:
(1) The RL agent is really learning to do stuff that leads you to give it positive feedback (which is an imperfect proxy for “stuff you like”). Won’t this lead to the RL agent manipulating us/replacing us with robots that always report they’re happy/otherwise Goodharting their reward function?
(2) This can only train an RL agent to do tasks that we can evaluate. What about tasks we can’t evaluate? For example, if you tell your RL agent to write a macroeconomic policy proposal, we might not be able to give it feedback on whether its proposal is good or not (because we’re not smart enough to evaluate macroeconomic policy), which sinks the entire RLHF method.
(3) A bunch of other less central concerns that I’ll relegate to footnotes.[1][2][3]
My response to objection (1) is … well at this point I’m really getting into “repeat myself from the OP” territory. Basically, I think this is a valid objection, but
(a) if the RL agent’s reward model is very accurate, it’s not obviously true that the easiest way for it to optimize for its reward is to do deceptive/Goodhart-y stuff; this feels like it should rely on empirical facts like the ones I mentioned in the OP.
(b) even if the naive approach doesn’t work because of this objection, we might be able to do other stuff on top of RLHF (e.g. interpretability, something else we haven’t thought of yet) to penalize Goodhart-y behavior or prevent it from arising in the first place.
The obvious counterargument here is “Look, Sam, you clearly are just not appreciating how much smarter than you a superintelligence will be. Inevitably there will be some way to Goodhart the reward function to get more reward than ‘just do what we want’ would give, and no technique you come up with of trying to penalize this behavior will stop the AI from finding and exploiting this strategy.” To which I have further responses, but I think I’ll resist going further down the conversational tree.
Objection (2) above is a good one, but seems potentially surmountable to me. Namely, it seems that there might be ways to use AI to improve our ability to evaluate things. The simplest form of this is recursive reward modelling: suppose you want to use RLHF to train an AI to do task X, but task X is difficult/expensive to evaluate; instead you break “evaluate X” into a bunch of easy-to-evaluate subtasks and train RL agents to help with those; now you’re able to more cheaply evaluate X.
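Here is a hypothetical sketch of that decomposition (the splitting rule and the per-piece check are placeholders I invented, not a real evaluator): instead of a human scoring the whole solution, the evaluation is split into cheap per-piece checks whose scores get aggregated.

```python
def split_evaluation(solution):
    # e.g. evaluate each claim of a policy proposal separately (assumed rule)
    return [piece for piece in solution.split(". ") if piece]

def cheap_check(piece):
    # stand-in for a human (possibly aided by a trained assistant) judging one
    # easy-to-verify piece; returns a score in [0, 1]
    return 0.4 if "tax" in piece else 1.0

def evaluate(solution):
    pieces = split_evaluation(solution)
    return sum(cheap_check(piece) for piece in pieces) / len(pieces)

print(evaluate("Raise rates gradually. Cut the fuel tax. Index benefits to inflation"))
```

The hope is that each cheap check is easy enough that human feedback on it stays reliable even when judging the whole solution isn’t.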
In summary, the story about how RLHF helps with alignment is “if we’re very lucky, naive RLHF might produce aligned agents; if we’re less lucky, RLHF + another alignment technique might still suffice.”
What have we learned from RLHF research so far?
Here’s some stuff that I’m aware of; probably there’s a bunch of takeaways that I’m not aware of yet.
(1) Recursively Summarizing Books with Human Feedback didn’t do recursive reward modelling as I described it above, but it did a close cousin: instead of breaking “evaluate X” up into subtasks, it broke the original task X up into a bunch of subtasks which were easier to evaluate. In this case X = “summarize a book” and the subtasks were “summarize small chunks of text.” I’m not sure how to feel about the result—the summaries were merely okay. But if you believe RLHF could be useful as one ingredient in alignment, then further research on whether you could get this to work would seem valuable to me.
(2) Redwood’s original research project used RLHF (at least, I think so[4]) to train an RL agent to generate text completions in which no human was portrayed as being injured. [EDIT: since I wrote this comment Redwood’s report came out. It doesn’t look like they did the RLHF part? Rather it seems like they just did the classifier part, and generated non-injurious text completions by generating a bunch of completions and filtering out the injurious ones; a sketch of that filtering approach follows this list.] Their goal was to make the RL agent very rarely (like 10^-30 of the time) generate injurious completions. I heard through the grapevine that they were not able to get such a low error rate, which is some evidence that … something? That modelling the way humans classify things with ML is hard? That distributional shift is a big deal? I’m not sure, but whatever it is it’s probably weak evidence against the usefulness of RLHF.
(3) On the other hand, some of the original work showed that RLHF seems to have really good sample efficiency, e.g. the agent at the top of this page learned to do a backflip with just 900 bits of human feedback (the preference-comparison setup behind that result is sketched below). That seems good to know, and makes me think that if value learning is going to happen at all, it will happen via RLHF.
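Here, in sketch form, is the filtering approach described in the EDIT in (2), as I understand it (the function names and stubs are hypothetical, not Redwood’s code):

```python
import random

def safe_completion(prompt, generate, is_injurious, max_tries=100):
    # Sample completions and return the first one the classifier passes.
    # The achievable failure rate is floored by the classifier's own error
    # rate, which is one reading of why 10^-30 was out of reach.
    for _ in range(max_tries):
        completion = generate(prompt)
        if not is_injurious(prompt, completion):
            return completion
    return None  # nothing acceptable found; the caller must handle this

# toy usage with stub generator/classifier
gen = lambda p: random.choice(["the man fell down", "the man sat down"])
clf = lambda p, c: "fell" in c
print(safe_completion("Once upon a time,", gen, clf))
```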
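And for (3): the backflip result came from fitting a reward model to pairwise human comparisons (“which of these two clips is better?”). Here is a stripped-down toy of that fitting step: a linear reward model on a one-dimensional behavior, with the human simulated (the real version uses neural nets on trajectory segments):

```python
import math, random

random.seed(0)
true_reward = lambda x: x   # simulated human who prefers larger x
w = 0.0                     # linear reward model r(x) = w * x

for step in range(2000):
    a, b = random.random(), random.random()            # two candidate behaviors
    human_prefers_a = true_reward(a) > true_reward(b)  # one bit of feedback
    # Bradley-Terry: the model's probability that a is the preferred one
    p_a = 1.0 / (1.0 + math.exp(w * b - w * a))
    # logistic-loss gradient step toward matching the human's comparison
    w += 0.1 * ((1.0 if human_prefers_a else 0.0) - p_a) * (a - b)

print(w > 0)  # True: the model learned the direction of the preference
```

The striking part is the bit count: each comparison carries at most one bit, and roughly 900 of them sufficed for a backflip.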
From your original question, it seems like what you really want to know is “how does the usefulness of this research compare to the usefulness of other alignment research?” Probably that largely depends on whether you believe the basic story for how RLHF could be useful (as well as how valuable you think other threads of alignment research are).
Q: When we first turn on the RL agent—when it hasn’t yet received much human feedback and therefore has a very inaccurate model of human preferences—won’t the agent potentially do lots of really bad things? A: Yeah, this seems plausible, but it might not be an insurmountable challenge. For instance, we could pre-train the agent’s reward model from a bunch of training runs controlled by a human operator or a less intelligent RL agent. Or maybe the people who are studying safe exploration will come up with something useful here.
Q: What about robustness to distributional shift? That is, even if our RL agent learns a good model of human preferences under ordinary circumstances, its model might be trash once things start to get weird, e.g. once we start colonizing space. A: One thing about RLHF is that you generally shouldn’t take the reward model offline, i.e. you should always continue giving the RL agent some amount of feedback on which the reward model continuously trains. So maybe if things get continuously weirder then our RL agents’ model of human preferences will continuously learn and we’ll be fine? Otherwise, I mainly want to ignore robustness to distributional shift because it’s an issue shared by all potential outer alignment solutions that I know of. No matter what approach to alignment you take, you need to hope that either someone else solves this issue or that it ends up not being a big deal for some reason.
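As a toy illustration of why keeping the reward model online might help (all numbers invented): the human’s preferred behavior drifts over time, and a model that keeps receiving a trickle of fresh feedback can track the drift, while a frozen one stays stuck at its original estimate.

```python
import random

random.seed(1)

def human_pref(x, t):
    # the "true" preferred behavior drifts as the world gets weirder
    target = 0.2 + 0.0005 * t
    return -abs(x - target)

online, frozen = 0.5, 0.5  # two reward models' estimates of the preferred behavior
for t in range(2000):
    if random.random() < 0.1:  # occasional fresh human feedback
        # nudge the online model toward whichever nearby behavior rates higher
        if human_pref(online + 0.01, t) > human_pref(online - 0.01, t):
            online += 0.01
        else:
            online -= 0.01

print(round(online, 2), frozen)  # online ends near the drifted target; frozen doesn't
```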
What about mesa-optimizers? Like in footnote 2, this is an issue for every potential alignment solution, and I’m mainly hoping that either someone solves it or it ends up not being a big deal.
Their write-up of the project, consisting of step 1 (train a classifier for text that portrays injury to humans) and step 2 (use the classifier to get an RL agent to generate non-injurious text completions), makes it sound like they stop training the classifier once they start training the RL agent. This is like doing RLHF where you take the reward model offline, which on my understanding tends to produce bad results. So I’m guessing that actually they never took the classifier offline, in which case what they did is just vanilla RLHF.
Thanks for the detailed answer, I am sheepish to have prompted so much effort on your part!
I guess what I was and am thinking was something like “Of course we’ll be using human feedback in our reward signal. Big AI companies will do this by default. Obviously they’ll train it to do what they want it to do and not what they don’t want it to do. The reason we are worried about AI risk is because we think that this won’t be enough.”
To which someone might respond “But still it’s good to practice doing it now. The experience might come in handy later when we are trying to align really powerful systems.”
To which I might respond “OK, but I feel like it’s a better use of our limited research time to try to anticipate ways in which RL from human feedback could turn out to be insufficient and then do research aimed at overcoming those ways. E.g. think about inner alignment problems, think about it possibly learning to do what makes us give positive feedback rather than what we actually want, etc. Let the capabilities researchers figure out how to do RL from human feedback, since they need to figure that out anyway on the path to deploying the products they are building. Safety researchers should focus on solving the problems that we anticipate RLHF doesn’t solve by itself.”
I don’t actually think this, because I haven’t thought about this much, so I’m uncertain and mostly deferring to others’ judgment. But I’d be interested to hear your thoughts! (You’ve written so much already, no need to actually reply)
Ah cool, I see—your concern is that RLHF is perhaps better left to the capabilities people, freeing up AI safety researchers to work on more neglected approaches.
That seems right to me, and I agree with it as a general heuristic! Some caveats:
I’m a random person who’s been learning a lot about this stuff lately, definitely not an active researcher. So my opinions about heuristics for what to work on probably aren’t worth much.
If you think RLHF research could be very impactful for alignment, that could make up for it being less neglected than other areas.
Distinctive approaches to RLHF (like Redwood’s attempts to get their reward model’s error extremely low) might be the sorts of things that capabilities people wouldn’t try.
Finally, as a historical note, it’s hard to believe that a decade ago the state of alignment was like “holy shit, how could we possibly hard-code human values into a reward function? This is gonna be impossible.” The fact that now we’re like “obviously big AI will, by default, build their AGIs with something like RLHF” is progress! And Paul’s comment elsethread is heartwarming—it implies that AI safety researchers helped accelerate the adoption of this safer-looking paradigm. In other words, if you believe RLHF helps improve our odds, then contra some recent pessimistic takes, you believe that progress is being made :)
We are moving rapidly from a world where people deploy manifestly unaligned models (where even talking about alignment barely makes sense) to people deploying models which are misaligned because (i) humans make mistakes in evaluation, (ii) there are high-stakes decisions so we can’t rely on average-case performance.
This seems like a good thing to do if you want to move on to research addressing the problems in RLHF: (i) improving the quality of the evaluations (e.g. by using AI assistance), and (ii) handling high-stakes objective misgeneralization (e.g. by adversarial training).
In addition to “doing the basic thing before the more complicated thing intended to address its failures,” it’s also the case that RLHF is a building block in the more complicated things.
I think that (a) there is a good chance that these boring approaches will work well enough to buy (a significant amount of) time for humans or superhuman AIs to make progress on alignment research or coordination, (b) when they fail, there is a good chance that their failures can be productively studied and addressed.
Overall it seems to me like the story here is reasonably good and has worked out reasonably well in practice. I think RLHF is being adopted more quickly than it otherwise would, and plenty of follow-up work is being done. I think many people in labs have a better understanding of what the remaining problems in alignment are; as a result they are significantly more likely to work productively on those problems themselves or to recognize and adopt solutions from elsewhere.
OK, thanks. I’m new to this debate, I take it I’m wandering in to a discussion that may already have been had to death.
I guess I’m worried that RLHF should basically be thought of as capabilities research instead of alignment/safety research. The rationale for this would be: Big companies will do RLHF before the end by default, since their products will embarrass them otherwise. By doing RLHF now and promoting it we help these companies get products to market sooner & free up their time to focus on other capabilities research.
I agree with your claims (a) and (b) but I don’t think they undermine this skeptical take, because I think that if RLHF fails the failures will be different for really powerful systems than for dumb systems.
I think it’d be useful if you spelled out those failures you think will occur in powerful systems, that won’t occur in any intermediate system (assuming some degree of slowness sufficient to allow real world deployment of not-yet-AGI agentic models).
For example, deception: lots of parts of the animal kingdom understand the concept of “hiding” or “lying in wait to strike”, I think? It already showed up in XLand IIRC. Imagine a chatbot trying to make a sale—avoiding problematic details of the product it’s selling seems like a dominant strategy.
There are definitely scarier failure modes that show up in even-more-powerful systems (e.g. actual honest-to-goodness long-term pretending to be harmless in order to end up in situations with more resources, which will never be caught with RLHF), and I agree pure alignment researchers should be focusing on those. But the suggestion that picking the low-hanging fruit won’t build momentum for working on the hardest problems does seem wrong to me.
As another example, consider the Beijing Academy of AI’s government-academia-industry LLM partnership. When their LLMs fail to do what they want, they’ll try RLHF—and it’ll kind of work, but then it’ll fail in a bunch of situations. They’ll be forced to confront the fact that actually, objective robustness is a real thing, and start funding research/taking proto-alignment research way more seriously/as being on the critical path to useful models. Wouldn’t it be great if there were a whole literature waiting for them on all the other things that empirically go wrong with RLHF, up to and including genuine inner misalignment concerns, once they get there?
Thanks! I take the point about animals and deception.
Wouldn’t it be great if there were a whole literature waiting for them on all the other things that empirically go wrong with RLHF, up to and including genuine inner misalignment concerns, once they get there?
Insofar as the pitch for RLHF is “Yes tech companies are going to do this anyway, but if we do it first then we can gain prestige, people will cite us, etc. and so people will turn to us for advice on the subject later, and then we’ll be able to warn them of the dangers” then actually that makes a lot of sense to me, thanks. I still worry that the effect size might be too small to be worth it, but idk.
I don’t think that there are failures that will occur in powerful systems that won’t occur in any intermediate system. However I’m skeptical that the failures that will occur in powerful systems will also occur in today’s systems. I must say I’m super uncertain about all of this and haven’t thought about it very much.
With that preamble aside, here is some wild speculation:
--Current systems (hopefully?) aren’t reasoning strategically about how to achieve goals & then executing on that reasoning. (You can via prompting get GPT-3 to reason strategically about how to achieve goals… but as far as we know it isn’t doing reasoning like that internally when choosing what tokens to output. Hopefully.) So, the classic worry of “the AI will realize that it needs to play nice in training so that it can do a treacherous turn later in deployment” just doesn’t apply to current systems. (Hopefully.) So if we see e.g. our current GPT-3 chatbot being deceptive about a product it is selling, we can happily train it to not do that and probably it’ll just genuinely learn to be more honest. But if it had strategic awareness and goal-directedness, it would instead learn to be less honest; it would learn to conceal its true intentions from its overseers.
--As humans grow up and learn more and (in some cases) do philosophy they undergo major shifts in how they view the world. This often causes them to change their minds about things they previously learned. For example, maybe at some point they learned to go to church because that’s what good people do because that’s what God says; later on they stop believing in God and stop going to church. And then later still they do some philosophy and adopt some weird ethical theory like utilitarianism and their behavior changes accordingly. Well, what if AIs undergo similar ontological shifts as they get smarter? Then maybe the stuff that works at one level of intelligence will stop working at another. (e.g. telling a kid that God is watching them and He says they should go to church stops working. Later when they become a utilitarian, telling them that killing civilians is murder and murder is wrong stops working too (if they are in a circumstance where the utilitarian calculus says civilian casualties are worth it for the greater good)).
I agree that “concealing intentions from overseers” might be a fairly late-game property, but it’s not totally obvious to me that it doesn’t become a problem sooner. If a chatbot realizes it’s dealing with a disagreeable person and therefore that it’s more likely to be inspected, and thus hews closer to what it thinks the true objective might be, the difference in behaviors should be pretty noticeable.
Re: ontology mismatch, this seems super likely to happen at lower levels of intelligence. E.g. I’d bet this even sometimes occurs in today’s model-based RL, as it’s trained for long enough that its world model changes. If we don’t come up with strategies for dealing with this dynamically, we aren’t going to be able to build anything with a world model that improves over time. Maybe that only happens too close to FOOM, but if you believe in a gradual-ish takeoff it seems plausible to have vanilla model-based RL work decently well before.
We are moving rapidly from a world where people deploy manifestly unaligned models (where even talking about alignment barely makes sense) to people deploying models which are misaligned because (i) humans make mistakes in evaluation, (ii) there are high-stakes decisions so we can’t rely on average-case performance.
What it feels like to me is that we are rapidly moving from a world where people deploy manifestly unaligned models to people deploying models which are still manifestly unaligned (where even talking about alignment barely makes sense), but which are getting differentially good at human modeling and deception (and maybe at supervising other AIs, which is where the hope comes from).
I don’t think the models are misaligned because humans are making mistakes in evaluation. The models are misaligned because we have made no progress at actually pointing towards anything like human values or other concepts like corrigibility or myopia.
In other words, models are mostly misaligned because there are strong instrumentally convergent incentives towards agency, and we don’t currently have any tools that allow us to shape the type of optimization that artificial systems are doing internally. Learning from human feedback seems if anything to be slightly more the kind of reward that incentivizes dangerous agency. This seems to fit neither into your (1) nor (2).
Instruct-GPT is not more aligned than GPT-3. It is more capable at performing many tasks, and we have some hope that some of the tasks at which it is getting better might help with AI Alignment down the line, but right now, at the current state of the AI alignment field, the problem is not that we can’t provide good enough evaluation, or that we can only get good “average-case” performance, it’s that we have systems with random goals that are very far from either matching human values or being reliably conservative.
And in addition to that, we now have a tool that allows any AI company to trivially train away any surface-level alignment problems, without addressing any of the actual underlying issues, creating a situation with very strong incentives towards learning human deception and manipulation, and a situation where obvious alignment failures are much less likely to surface.
My guess is you are trying to point towards a much more sophisticated and broader thing by your (2) than I interpret you as saying here, but the above is my response to my best interpretation of what you mean by (2).
In other words, models are mostly misaligned because there are strong instrumentally convergent incentives towards agency, and we don’t currently have any tools that allow us to shape the type of optimization that artificial systems are doing internally.
In the context of my comment, this appears to be an empirical claim about GPT-3. Is that right? (Otherwise I’m not sure what you are saying.)
If so, I don’t think this is right. On typical inputs I don’t think GPT-3 is instrumentally behaving well on the training distribution because it has a model of the data-generating process.
I think on distribution you are getting good behavior mostly either by not optimizing, or by optimizing for something we want. I think to the extent it’s malign it’s because there are possible inputs on which it is optimizing for something you don’t want, but those inputs are unlike those that appear in training and you have objective misgeneralization.
In that regime, I think the on-distribution performance is probably aligned and there is not much in-principle obstruction to using adversarial training to improve the robustness of alignment.
Instruct-GPT is not more aligned than GPT-3. It is more capable at performing many tasks, and we have some hope that some of the tasks at which it is getting better might help with AI Alignment down the line
Could you define the word “alignment” as you are using it?
I’m using roughly the definition here. I think it’s the case that there are many inputs where GPT-3 is not trying to do what you want, but Instruct-GPT is. Indeed, I think Instruct-GPT is actually mostly trying to do what you want to the extent that it is trying to do anything at all. That would lead me to say it is more “aligned.”
I agree there are subtleties like “If I ask Instruct-GPT to summarize a story, is it trying to summarize the story? Or trying to use that as evidence about ‘what Paul wants’ and then do that?” And I agree there is a real sense in which it isn’t smart enough for that distinction to be consistently meaningful, and so in that sense you might say my definition of intent alignment doesn’t really apply. (I more often think about models being “benign” or “malign,” more like asking: is it trying to optimize for something despite knowing that you wouldn’t like it.) I don’t think that’s what you are talking about here though.
right now, at the current state of the AI alignment field, the problem is not that we can’t provide good enough evaluation, or that we can only get good “average-case” performance, it’s that we have systems with random goals that are very far from either matching human values or being reliably conservative.
If you have good oversight, I think you probably get good average case alignment. That’s ultimately an empirical claim about what happens when you do SGD, but the on-paper argument looks quite good (namely: on-distribution alignment would improve the on-distribution performance and seems easy for SGD to learn relative to the complexity of the model itself) and it appears to match the data so far to the extent we have relevant data.
You seem to be confidently stating it’s false without engaging at all with the argument in favor or presenting or engaging with any empirical evidence.
You seem to be confidently stating it’s false without engaging at all with the argument in favor or presenting or engaging with any empirical evidence.
But which argument in favor did you present? You just said “the models are unaligned for these 2 reasons”, when those reasons do not seem comprehensive to me, and you did not give any justification for why those two reasons are comprehensive (or provide any links).
I tried to give a number of specific alternative reasons that do not seem to be covered by either of your two cases, and included a statement that we might disagree on definitional grounds, but that I don’t actually know what definitions you are using, and so can’t be confident that my critique makes sense.
Now that you’ve provided a definition, I still think what I said holds. My guess is there is a large inferential distance here, so I don’t think it makes sense to try to bridge that whole distance within this comment thread, though I will provide an additional round of responses.
If so, I don’t think this is right. On typical inputs I don’t think GPT-3 is instrumentally behaving well on the training distribution because it has a model of the data-generating process.
I don’t think your definition of intent-alignment requires any unaligned system to have a model of the data-generating process, so I don’t understand the relevance of this. GPT-3 is not unaligned because it has a model of the data-generating process, and I didn’t claim that.
I did claim that neither GPT-3 nor Instruct-GPT is “trying to do what the operator wants it to do”, according to your definition, and that the primary reason for that is that in as much as its training process did produce a model that has “goals” and so can be modeled in any consequentialist terms, those “goals” do not match up with trying to be helpful to the operator. Most likely, they are a pretty messy objective we don’t really understand (which in the case of GPT-3 might be best described as “trying to generate text that in some simple latent space resembles the training distribution”; I don’t have any short description of what the “goals” of Instruct-GPT might be, though my guess is they are still pretty close to GPT-3’s goals).
Indeed, I think Instruct-GPT is actually mostly trying to do what you want to the extent that it is trying to do anything at all. That would lead me to say it is more “aligned.”
I don’t think we know what Instruct-GPT is “trying to do”, and it seems unlikely to me that it is “trying to do what I want”. I agree in some sense it is “more trying to do what I want”, though not in a way that feels obviously very relevant to more capable systems, and not in a way that aligns very well with your intent definition (I feel like if I had to apply your linked definition to Instruct-GPT, I would say something like “ok, seems like it isn’t intent aligned, since the system doesn’t really seem to have much of an intent. And if there is a mechanism in its inner workings that corresponds to intent, we have no idea what thing it is pointed at, so probably it isn’t pointed at the right thing”).
And in either case, even if it is the case that if you squint your eyes a lot the system is “more aligned”, this doesn’t make the sentence “many of today’s systems are aligned unless humans make mistakes in evaluation or are deployed in high-stakes environments” true. “More aligned” is not equal to “aligned”.
The correct sentence seems to me “many of those systems are still mostly unaligned, but might be slightly more aligned than previous systems, though we have some hope that with better evaluation we can push that even further, and the misalignment problems are less bad on lower-stakes problems when we can rely on average-case performance, though overall the difference in alignment between GPT and Instruct-GPT is pretty unclear and probably not very large”.
I think on distribution you are getting good behavior mostly either by not optimizing, or by optimizing for something we want. I think to the extent it’s malign it’s because there are possible inputs on which it is optimizing for something you don’t want, but those inputs are unlike those that appear in training and you have objective misgeneralization.
This seems wrong to me. On-distribution it seems to me that the system is usually optimizing for something that I don’t want. For example, GPT-3 is primarily trying to generate text that represents the distribution it’s drawn from, which very rarely aligns with what I want (and is why prompt-engineering has such a large effect, e.g. “you are Albert Einstein” as a prefix improves performance on many tasks). Instruct-GPT does a bit better here, but probably most of its internal optimization power is still thrown at reasoning with the primary “intention” of generating text that is similar to its input distribution, since it seems unlikely that the fine-tuning completely rewrote most of these internal heuristics.
My guess is if Instruct-GPT was intent-aligned even for low-impact tasks, we could get it to be substantially more useful on many tasks. But my guess is what we currently have is mostly a model that is still primarily “trying” to generate text that is similar to its training distribution, with a few heuristics baked in in the human-feedback stage that make that text more likely to be a good fit for the question asked. In as much as the model is “trying to do something”, i.e. what most of its internal optimization power is pointed at, I am very skeptical that that is aligned with my task.
(Similarly, looking at Redwood’s recent model, it seems clear to me that they did not produce a model that “intends” to produce non-injurious completions. The model has two parts, one that is just “trying” to generate text similar to its training distribution, and a second part that is “trying” to detect whether a completion is injurious. This model seems clearly not intent-aligned, since almost none of its optimization power is going towards our target objective.)
If you have good oversight, I think you probably get good average case alignment. That’s ultimately an empirical claim about what happens when you do SGD, but the on-paper argument looks quite good (namely: on-distribution alignment would improve the on-distribution performance and seems easy for SGD to learn relative to the complexity of the model itself) and it appears to match the data so far to the extent we have relevant data.
My guess is a lot of work is done here by the term “average case alignment”, so I am not fully sure how to respond. I disagree that the on-paper argument looks quite good, though it depends a lot on how narrowly you define “on-distribution”. Given my arguments above, you must either mean something different from intent-alignment (since to me at least it seems clear that Redwood’s model is not intent-aligned), or disagree with me on whether systems like Redwood’s are intent-aligned, in which case I don’t really know how to consistently apply your intent-alignment definition.
I also feel particularly confused about the term “average case alignment”, combined with “intent-alignment”. I can ascribe goals at multiple different levels to a model, and my guess is we both agree that describing current systems as having intentions at all is kind of fraught, but in as much as a model has a coherent goal, it seems like that goal is pretty consistent between different prompts, and so I am confused why we should expect average case alignment to be very different from normal alignment. It seems that if I have a model that is trying to do something, then asking it multiple times probably won’t make a difference to its intention (I think; I mean, again, this all feels very handwavy, which is part of the reason why it feels so wrong to me to describe current models as “aligned”).
I currently think that the main relevant similarities between Instruct-GPT and a model that is trying to kill you, are about errors of the overseer (i.e. bad outputs to which they would give a high reward) or high-stakes errors (i.e. bad outputs which can have catastrophic effects before they are corrected by fine-tuning).
I’m interested in other kinds of relevant similarities, since I think those would be exciting and productive things to research. I don’t think the framework “Instruct-GPT and GPT-3 e.g. copy patterns that they saw in the prompt, so they are ‘trying’ to predict the next word and hence are misaligned” is super useful, though I see where it’s coming from and agree that I started it by using the word “aligned”.
Relatedly, and contrary to my original comment, I do agree that there can be bad intentional behavior left over from pre-training. This is a big part of what ML researchers are motivated by when they talk about improving the sample-efficiency of RLHF. I usually try to discourage people from working on this issue, because it seems like something that will predictably get better rather than worse as models improve (and I expect you are even less happy with it than I am).
I agree that there is a lot of inferential distance, and it doesn’t seem worth trying to close the gap here. I’ve tried to write down a fair amount about my views, and I’m always interested to read arguments / evidence / intuitions for more pessimistic conclusions.
Similarly, looking at Redwood’s recent model, it seems clear to me that they did not produce a model that “intends” to produce non-injurious completions.
I agree with this, though it’s unrelated to the stated motivation for that project or to its relationship to long-term risk.
I currently think that the main relevant similarities between Instruct-GPT and a model that is trying to kill you, are about errors of the overseer (i.e. bad outputs to which they would give a high reward) or high-stakes errors (i.e. bad outputs which can have catastrophic effects before they are corrected by fine-tuning).
Phrased this way, I still disagree, but I think I disagree less strongly, and feel less of a need to respond to this. I care particularly much about using terms like “aligned” in consistent ways. Importantly, having powerful intent-aligned systems is much more useful than having powerful systems that just fail to kill you (e.g. because they are very conservative), and so getting to powerful aligned systems is a win-condition in the way that getting to powerful non-catastrophic systems is not.
I agree with this, though it’s unrelated to the stated motivation for that project or to its relationship to long-term risk.
Yep, I didn’t intend to imply that this was in contrast to the intention of the research. It was just on my mind as a recent architecture that I was confident we both had thought about, and so could use as a convenient example.
Counterarguing johnswentworth on RL from human feedback
johnswentworth recently wrote that “RL from human feedback is actively harmful to avoiding AI doom.” Piecing together things from his comments elsewhere. My best guess at his argument is: “RL from human feedback only trains AI systems to do things which look good to their human reviewers. Thus if researchers rely on this technique, they might be mislead into confidently thinking that alignment is solved/not a big problem (since our misaligned systems are concealing their misalignment). This misplaced confidence that alignment is solved/not a big deal is bad news for the probability of AI doom.”
I disagree with (this conception of) John’s argument; here are two counterarguments:
Whether “deceive humans” is the strategy RL agents actually learn seems like it should rely on empirical facts. John’s argument relies on the claim that AI systems trained with human feedback will probably learn to deceive their reviewers (rather than actually do a good job on the task). This seems like an empirical claim that ought to rely on facts about:
(i) the relative difficulties of (a) deceiving humans, (b) correctly performing the task, and (c) evaluating proposed solutions to the task;
(ii) how the AI system generalizes from training data (which in turn depends on the internal structure of the system).
John seems to think that intelligent systems will have a much easier time with (a) than with (b), given our competence at (c); in fact so much easier that questions of inductive bias (ii) aren’t relevant. This doesn’t seem quite so overdetermined to me.[1] John and Paul Christiano discussed the above point in this thread; they did quite an impressive job whittling their disagreement down to the crux “evaluation isn’t easier than generation, and that claim is true regardless of how good you are at evaluation until you get basically perfect at it.”
RL from human feedback might be only one ingredient in a solution to alignment. (I haven’t seen this discussed elsewhere.) I don’t think many people expect RL from human feedback to solve alignment by itself—in particular, it only works for tasks that we are able to evaluate, which don’t include all tasks that we’d want an AGI to do. One hope is that we could use RL from human feedback (or other ML techniques) to build AI assistants which make it possible to evaluate more tasks and with higher accuracy; in other words, it might be the case that our ability to evaluate proposed solutions actually scales with our AI capabilities.[2] Another possible hope is that RL from human feedback + some sort of interpretability could help make sure that our RL agents aren’t pursuing strategies that look like deception (and then hope that our interpretability is good enough that “doesn’t look like deception” is a good proxy for “not actually deceptive”). There are probably also other things I haven’t thought of which you can combine RL from human feedback with. In any case, it might be worthwhile to work on RL from human feedback even if it can’t solve alignment by itself, provided you believe it might be moving us closer to the goal, with the remaining work being done by other alignment techniques.
Both of these above counterarguments assume that things go better than the worst case. (For example, RL from human feedback really is completely useless if you’re worried that your RL agent might be a deceptively aligned mesa-optimiser.) But it can still be worthwhile to work on alignment approaches which only help in non-worst-case worlds
My main intuition here is that AI systems will try to deceive us long before they’re good at it; we’ll catch these initial attempts and give them negative feedback. Then it seems plausible to me that they’ll just learn to generalize correctly from there (and also plausible that they won’t). In other words, it’s worth trying to align AI systems this way in case alignment is easier than we expected due to good luck with inductive biases.
Of course, you need your assistants to be aligned, but you could optimistically hope that you can bootstrap up from tasks which are so easy for you to evaluate that your AIs trained from human feedback are very likely to be aligned.
I think this is mostly what we are worried about though? RL agents are mesa-optimizers already, or if they aren’t, they eventually will be if they are smart/capable enough. Deceptive alignment is the main thing we all worry about, though not the only thing. Without deception, if we are careful we can hopefully notice misalignment and course-correct, and/or use our AIs to do useful alignment work for us.
(Meta: after I made this post, I realized that what I wrote was a little confusing on this point (because I was a little confused on this point). I’ve been hoping someone would call me out on it to give me an excuse to spend time writing up a clarification. So thanks!)
So, on my understanding, there are two distinct ways you can get deceptive behavior out of an AI system:
(1) You could have trained it with a mis-specified objective function. Then your AI might start Goodharting its given reward; if this reward function was learned via human feedback, this means doing things that seem good to humans, but might not actually be good. This deceptive behavior could even arise purely “by accident,” that is, without the AI being able to grasp deception or even having a world model that includes humans. My favorite example is the one mentioned in the challenges section here—a simulated robotic hand was trained to grasp a ball, but it instead learned to appear to grasp the ball.
(2) Even if you have perfectly specified your objective function, your model might be a deceptively-aligned mesa-optimizer with a completely unrelated mesa-objective. (Aside: this is what makes mesa-optimizers terrifying to me—they imply that even if we were able to perfectly specify human values, we might still all die because the algorithms we trained to maximize human values ended up finding mis-aligned mesa-optimizers instead.)
In other words, (2) is the inner alignment failure and if you’re worried about it you think hard about the probability of mesa-optimizers arising; (1) is the outer alignment failure and if you’re worried about it I guess you argue a lot about air conditioners.
I’m pretty sure John was worried about (1), because if he were worried about (2) he would have said that all outer alignment research, not just RL from human feedback, is actively harmful to avoiding AI doom. (And FWIW this stronger claim seems super wrong to me, and I expect it also seems wrong to John and most other people.)
Thanks for the explanation (upvoted). I don’t really understand it though, it seems like a straw man. At any rate I’m not now interested in exegesis on John, I want to think about the arguments and claims in their own right.
What would you say is the main benefit from the RL from Human Feedback research so far? What would have happened if the authors had instead worked on a different project?
I feel like these questions are a little tricky to answer, so instead I’ll attempt to answer the questions “What is the case for RL from human feedback (RLFHF) helping with alignment?” and “What have we learned from RLFHF research so far?”
What is the case for RLFHF helping with alignment?
(The answer will mainly be me repeating the stuff I said in my OP, but at more length.)
The most naive case for RLFHF is “you train some RL agent to assist you, giving it positive feedback when it does stuff you like and negative feedback for stuff you don’t like. Eventually it learns to model your preferences well and is able to only do stuff you like.”
The immediate objections that come to mind are:
(1) The RL agent is really learning to do stuff that lead you to giving it positive feedback (which is an imperfect proxy for “stuff you like.”) Won’t this lead to the RL agent manipulating us/replacing us with robots that always report they’re happy/otherwise Goodharting their reward function?
(2) This can only train an RL agent to do tasks that we can evaluate. What about tasks we can’t evaluate? For example, if you tell your RL agent to write macroeconomic policy proposal, we might not be able to give it feedback on whether its proposal is good or not (because we’re not smart enough to evaluate macroeconomic policy), which sinks the entire RLFHF method.
(3) A bunch of other less central concerns that I’ll relegate to footnotes.[1][2][3]
My response to objection (1) is … well at this point I’m really getting into “repeat myself from the OP” territory. Basically, I think this is a valid objection, but
(a) if the RL agent’s reward model is very accurate, it’s not obviously true that the easiest way for it to optimize for its reward is to do deceptive/Goodhart-y stuff; this feels like it should rely on empirical facts like the ones I mentioned in the OP.
(b) even if the naive approach doesn’t work because of this objection, we might be able to do other stuff on top of RLFHF (e.g. interpretability, something else we haven’t thought of yet) to penalize Goodhart-y behavior or prevent it from arising in the first place.
The obvious counterargument here is “Look, Sam, you clearly are just not appreciating how much smarter than you a superintelligence will be. Inevitably there will be some way to Goodhart the reward function to get more reward than ‘just do what we want’ would give, and no technique you come up with of trying to penalize this behavior will stop the AI from finding and exploiting this strategy.” To which I have further responses, but I think I’ll resist going further down the conversational tree.
Objection (2) above is a good one, but seems potentially surmountable to me. Namely, it seems that there might be ways to use AI to improve our ability to evaluate things. The simplest form of this is recursive reward modelling: suppose you want to use RLFHF to train an AI to do task X but task X is difficult/expensive to evaluate; instead you break “evaluate X” into a bunch of easy-to-evaluate subtasks, and train RL agents to help with those; now you’re able to more cheaply evaluate X.
In summary, the story about how RLFHF helps with alignment is “if we’re very lucky, naive RLFHF might produced aligned agents; if we’re less lucky, RLFHF + another alignment technique might still suffice.”
What have we learned from RLFHF research so far?
Here’s some stuff that I’m aware of; probably there’s a bunch of takeaways that I’m not aware of yet.
(1) Learning to Summarize from Human Feedback didn’t do recursive reward modelling as I described it above, but it did a close cousin: instead of breaking “evaluate X” up into subtasks it broke the original task X up into a bunch of subtasks which were easier to evaluate. In this case X = “summarize a book” and the subtasks were “evaluate small chunks of text.” I’m not sure how to feel about the result—the summaries were merely okay. But if you believe RLFHF could be useful as one ingredient in alignment, then further research on whether you could get this to work would seem valuable to me.
(2) Redwood’s original research project used RLFHF (at least, I think so[4]) to train an RL agent to generate text completions in which no human was portrayed as being injured. [EDIT: since I wrote this comment Redwood’s report came out. It doesn’t look like they did the RLHF part? Rather it seems like they just did the classifier part, and generated non-injurious text completions by generating a bunch of completions and filtering out the injurious ones.] Their goal was to make the RL agent very rarely (like 10^-30 of the time) generate injurious completions. I heard through the grapevine that they were not able to get such a low error rate, which is some evidence that … something? That modelling the way humans classify things with ML is hard? That distributional shift is a big deal? I’m not sure, but whatever it is it’s probably weak evidence against the usefulness of RLFHF.
(3) On the other hand, some of the original work showed that RLFHF seems to have really good sample efficiency, e.g. the agent at the top of this page learned to do a backflip with just 900 bits of human feedback. That seems good to know, and makes me think that if value learning is going to happen at all, it will happen via RLFHF.
From your original question, it seems like what you really want to know is “how does this usefulness of this research compare to the usefulness of other alignment research?” Probably that largely depends on whether you believe the basic story for how RLFHF could be useful (as well as how valuable you think other threads of alignment research are).
Q: When we first turn on the RL agent—when it hasn’t yet received much human feedback and therefore has a very inaccurate model of human preferences—won’t the agent potentially do lots of really bad things? A: Yeah, this seems plausible, but it might not be an insurmountable challenge. For instance, we could pre-train the agent’s reward model from a bunch of training runs controlled by a human operator or a less intelligent RL agent. Or maybe the people who are studying safe exploration will come up with something useful here.
Q: What about robustness to distributional shift? That is, even if our RL agent learns a good model of human preferences under ordinary circumstances, its model might be trash once things start to get weird, e.g. once we start colonizing space. A: One thing about RLFHF is that you generally shouldn’t take the reward model offline, i.e. you should always continue giving the RL agent some amount of feedback on which the reward model continuously trains. So maybe if things get continuously weirder then our RL agents’ model of human preferences will continuously learn and we’ll be fine? Otherwise, I mainly want to ignore robustness to distributional shift because it’s an issue shared by all potential outer alignment solutions that I know of. No matter what approach to alignment you take, you need to hope that either someone else solves this issue or that it ends up not being a big deal for some reason.
What about mesa-optimizers? Like in footnote 2, this is an issue for every potential alignment solution, and I’m mainly hoping that either someone solves it or it ends up not being a big deal.
Their write up of the project, consisting of step 1 (train a classifier for text that portrays injury to humans) and step 2 (use the classifier to get an RL agent to generate non-injurious text completions), makes it sounds like they stop training the classifier once they start training the RL agent. This is like doing RLFHF where you take the reward model offline, which on my understanding tends to produce bad results. So I’m guessing that actually they never took the classifier offline, in which case what they did is just vanilla RLFHF.
Thanks for the detailed answer, I am sheepish to have prompted so much effort on your part!
I guess what I was and am thinking was something like “Of course we’ll be using human feedback in our reward signal. Big AI companies will do this by default. Obviously they’ll train it to do what they want it to do and not what they don’t want it to do. The reason we are worried about AI risk is because we think that this won’t be enough.”
To which someone might respond “But still it’s good to practice doing it now. The experience might come in handy later when we are trying to align really powerful systems.”
To which I might respond “OK, but I feel like it’s a better use of our limited research time to try to anticipate ways in which RL from human feedback could turn out to be insufficient and then do research aimed at overcoming those ways. E.g. think about inner alignment problems, think about it possibly learning to do what makes us give positive feedback rather than what we actually want, etc. Let the capabilities researchers figure out how to do RL from human feedback, since they need to figure that out anyway on the path to deploying the products they are building. Safety researchers should focus on solving the problems that we anticipate RLHF doesn’t solve by itself.”
I don’t actually think this, because I haven’t thought about this much, so I’m uncertain and mostly deferring to others’ judgment. But I’d be interested to hear your thoughts! (You’ve written so much already; no need to actually reply.)
Ah cool, I see—your concern is that RLHF might be better left to the capabilities people, freeing up AI safety researchers to work on more neglected approaches.
That seems right to me, and I agree with it as a general heuristic! Some caveats:
I’m a random person who’s been learning a lot about this stuff lately, definitely not an active researcher. So my opinions about heuristics for what to work on probably aren’t worth much.
If you think RLHF research could be very impactful for alignment, that could make up for it being less neglected than other areas.
Distinctive approaches to RLHF (like Redwood’s attempts to get their reward model’s error extremely low) might be the sorts of things that capabilities people wouldn’t try.
Finally, as a historical note, it’s hard to believe that a decade ago the state of alignment was like “holy shit, how could we possibly hard-code human values into a reward function? This is gonna be impossible.” The fact that now we’re like “obviously big AI will, by default, build their AGIs with something like RLHF” is progress! And Paul’s comment elsethread is heartwarming—it implies that AI safety researchers helped accelerate the adoption of this safer-looking paradigm. In other words, if you believe RLHF helps improve our odds, then contra some recent pessimistic takes, you believe that progress is being made :)
We are moving rapidly from a world where people deploy manifestly unaligned models (where even talking about alignment barely makes sense) to people deploying models which are misaligned because (i) humans make mistakes in evaluation, or (ii) there are high-stakes decisions, so we can’t rely on average-case performance.
This seems like a good thing to do if you want to move on to research addressing the remaining problems with RLHF: (i) improving the quality of the evaluations (e.g. by using AI assistance), and (ii) handling high-stakes objective misgeneralization (e.g. by adversarial training).
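To sketch the adversarial-training idea in (ii) (entirely hypothetically): repeatedly search for inputs that elicit bad behavior, grade them with the overseer, and train the failures away. Here `find_adversarial_inputs`, `overseer_labels`, and `fine_tune` are placeholders; the attacker could be a human red team, another model, or a gradient-based search.

```python
# Hypothetical placeholders for this sketch:
def find_adversarial_inputs(model, attacker, n=100): ...  # inputs that elicit bad behavior
def overseer_labels(inputs, outputs): ...                 # overseer's judgments of the outputs
def fine_tune(model, inputs, labels): ...                 # train toward the corrected behavior

def adversarial_training(model, attacker, rounds=10):
    """Hunt for inputs where the model misbehaves, then train those
    failures away before they can occur as high-stakes errors."""
    for _ in range(rounds):
        hard_inputs = find_adversarial_inputs(model, attacker)
        outputs = [model(x) for x in hard_inputs]
        labels = overseer_labels(hard_inputs, outputs)
        fine_tune(model, hard_inputs, labels)
    return model
```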
In addition to “doing the basic thing before the more complicated thing intended to address its failures,” it’s also the case that RLHF is a building block in the more complicated things.
I think that (a) there is a good chance that these boring approaches will work well enough to buy (a significant amount of) time for humans or superhuman AIs to make progress on alignment research or coordination, and (b) when they fail, there is a good chance that their failures can be productively studied and addressed.
Overall it seems to me like the story here is reasonably good and has worked out reasonably well in practice. I think RLHF is being adopted more quickly than it otherwise would, and plenty of follow-up work is being done. I think many people in labs have a better understanding of what the remaining problems in alignment are; as a result they are significantly more likely to work productively on those problems themselves or to recognize and adopt solutions from elsewhere.
OK, thanks. I’m new to this debate; I take it I’m wandering into a discussion that may already have been had to death.
I guess I’m worried that RLHF should basically be thought of as capabilities research instead of alignment/safety research. The rationale for this would be: big companies will do RLHF before the end by default, since their products will embarrass them otherwise. By doing RLHF now and promoting it, we help these companies get products to market sooner & free up their time to focus on other capabilities research.
I agree with your claims (a) and (b) but I don’t think they undermine this skeptical take, because I think that if RLHF fails the failures will be different for really powerful systems than for dumb systems.
I think it’d be useful if you spelled out the failures you think will occur in powerful systems but won’t occur in any intermediate system (assuming some degree of slowness sufficient to allow real-world deployment of not-yet-AGI agentic models).
For example, deception: lots of parts of the animal kingdom understand the concept of “hiding” or “lying in wait to strike”, I think? It already showed up in XLand IIRC. Imagine a chatbot trying to make a sale—avoiding problematic details of the product it’s selling seems like a dominant strategy.
There are definitely scarier failure modes that show up in even-more-powerful systems (e.g. actual honest-to-goodness long-term pretending to be harmless in order to end up in situations with more resources, which will never be caught with RLHF), and I agree pure alignment researchers should be focusing on those. But the suggestion that picking the low-hanging fruit won’t build momentum for working on the hardest problems does seem wrong to me.
As another example, consider the Beijing Academy of AI’s government-academia-industry LLM partnership. When their LLMs fail to do what they want, they’ll try RLHF—and it’ll kind of work, but then it’ll fail in a bunch of situations. They’ll be forced to confront the fact that actually, objective robustness is a real thing, and start funding research/taking proto-alignment research way more seriously/as being on the critical path to useful models. Wouldn’t it be great if there were a whole literature waiting for them on all the other things that empirically go wrong with RLHF, up to and including genuine inner misalignment concerns, once they get there?
Thanks! I take the point about animals and deception.
Insofar as the pitch for RLHF is “Yes tech companies are going to do this anyway, but if we do it first then we can gain prestige, people will cite us, etc. and so people will turn to us for advice on the subject later, and then we’ll be able to warn them of the dangers” then actually that makes a lot of sense to me, thanks. I still worry that the effect size might be too small to be worth it, but idk.
I don’t think that there are failures that will occur in powerful systems that won’t occur in any intermediate system. However I’m skeptical that the failures that will occur in powerful systems will also occur in today’s systems. I must say I’m super uncertain about all of this and haven’t thought about it very much.
With that preamble aside, here is some wild speculation:
--Current systems (hopefully?) aren’t reasoning strategically about how to achieve goals & then executing on that reasoning. (You can, via prompting, get GPT-3 to reason strategically about how to achieve goals… but as far as we know it isn’t doing reasoning like that internally when choosing what tokens to output. Hopefully.) So the classic worry of “the AI will realize that it needs to play nice in training so that it can do a treacherous turn later in deployment” just doesn’t apply to current systems. (Hopefully.) So if we see e.g. our current GPT-3 chatbot being deceptive about a product it is selling, we can happily train it not to do that, and probably it’ll just genuinely learn to be more honest. But if it had strategic awareness and goal-directedness, it would instead learn to be less honest; it would learn to conceal its true intentions from its overseers.
--As humans grow up and learn more and (in some cases) do philosophy they undergo major shifts in how they view the world. This often causes them to change their minds about things they previously learned. For example, maybe at some point they learned to go to church because that’s what good people do because that’s what God says; later on they stop believing in God and stop going to church. And then later still they do some philosophy and adopt some weird ethical theory like utilitarianism and their behavior changes accordingly. Well, what if AIs undergo similar ontological shifts as they get smarter? Then maybe the stuff that works at one level of intelligence will stop working at another. (e.g. telling a kid that God is watching them and He says they should go to church stops working. Later when they become a utilitarian, telling them that killing civilians is murder and murder is wrong stops working too (if they are in a circumstance where the utilitarian calculus says civilian casualties are worth it for the greater good)).
I agree that “concealing intentions from overseers” might be a fairly late-game property, but it’s not totally obvious to me that it doesn’t become a problem sooner. If a chatbot realizes it’s dealing with a disagreeable person and therefore that it’s more likely to be inspected, and thus hews closer to what it thinks the true objective might be, the difference in behaviors should be pretty noticeable.
Re: ontology mismatch, this seems super likely to happen at lower levels of intelligence. E.g. I’d bet this even sometimes occurs in today’s model-based RL, as it’s trained for long enough that its world model changes. If we don’t come up with strategies for dealing with this dynamically, we aren’t going to be able to build anything with a world model that improves over time. Maybe that only happens too close to FOOM, but if you believe in a gradual-ish takeoff it seems plausible to have vanilla model-based RL work decently well before.
What it feels like to me is that we are rapidly moving from a world where people deploy manifestly unaligned models to people deploying models which are still manifestly unaligned (where even talking about alignment barely makes sense), but which are getting differentially good at human modeling and deception (and maybe at supervising other AIs, which is where the hope comes from).
I don’t think the models are misaligned because humans are making mistakes in evaluation. The models are misaligned because we have made no progress at actually pointing towards anything like human values or other concepts like corrigibility or myopia.
In other words, models are mostly misaligned because there are strong instrumentally convergent incentives towards agency, and we don’t currently have any tools that allow us to shape the type of optimization that artificial systems are doing internally. Learning from human feedback seems, if anything, slightly more like the kind of reward that incentivizes dangerous agency. This seems to fit into neither your (i) nor your (ii).
Instruct-GPT is not more aligned than GPT-3. It is more capable at performing many tasks, and we have some hope that some of the tasks at which it is getting better might help with AI alignment down the line. But right now, at the current state of the AI alignment field, the problem is not that we can’t provide good enough evaluation, or that we can only get good “average-case” performance; it’s that we have systems with essentially random goals that are very far from human values and that aren’t capable of being reliably conservative.
On top of that, we now have a tool that allows any AI company to trivially train away surface-level alignment problems without addressing any of the actual underlying issues, creating very strong incentives for models to learn human deception and manipulation, and a situation where obvious alignment failures are much less likely to surface.
My guess is you are trying to point towards a much more sophisticated and broader thing with your (ii) than I interpret you as saying here, but the above is my response to my best interpretation of what you mean by (ii).
In the context of my comment, this appears to be an empirical claim about GPT-3. Is that right? (Otherwise I’m not sure what you are saying.)
If so, I don’t think this is right. On typical inputs, I don’t think GPT-3 is instrumentally behaving well on the training distribution because it has a model of the data-generating process.
I think on-distribution you are mostly getting good behavior either by not optimizing, or by optimizing for something we want. I think to the extent it’s malign, it’s because there are possible inputs on which it is optimizing for something you don’t want, but those inputs are unlike those that appear in training, and you get objective misgeneralization.
In that regime, I think the on-distribution performance is probably aligned and there is not much in-principle obstruction to using adversarial training to improve the robustness of alignment.
Could you define the word “alignment” as you are using it?
I’m using roughly the definition here. I think it’s the case that there are many inputs where GPT-3 is not trying to do what you want, but Instruct-GPT is. Indeed, I think Instruct-GPT is actually mostly trying to do what you want to the extent that it is trying to do anything at all. That would lead me to say it is more “aligned.”
I agree there are subtleties like “If I ask Instruct-GPT to summarize a story, is it trying to summarize the story? Or trying to use that as evidence about ‘what Paul wants’ and then do that?” And I agree there is a real sense in which it isn’t smart enough for that distinction to be consistently meaningful, and so in that sense you might say my definition of intent alignment doesn’t really apply. (I more often think about models being “benign” or “malign,” more like asking: is it trying to optimize for something despite knowing that you wouldn’t like it?) I don’t think that’s what you are talking about here, though.
If you have good oversight, I think you probably get good average-case alignment. That’s ultimately an empirical claim about what happens when you do SGD, but the on-paper argument looks quite good (namely: on-distribution alignment would improve on-distribution performance, and seems easy for SGD to learn relative to the complexity of the model itself), and it appears to match the data so far, to the extent we have relevant data.
You seem to be confidently stating it’s false without engaging at all with the argument in favor or presenting or engaging with any empirical evidence.
But which argument in favor did you present? You just said “the models are unaligned for these 2 reasons”, when those reasons do not seem comprehensive to me, and you did not give any justification for why those two reasons are comprehensive (or provide any links).
I tried to give a number of specific alternative reasons that do not seem to be covered by either of your two cases, and included a statement that we might disagree on definitional grounds, but that I don’t actually know what definitions you are using, and so can’t be confident that my critique makes sense.
Now that you’ve provided a definition, I still think what I said holds. My guess is there is a large inferential distance here, so I don’t think it makes sense to try to bridge that whole distance within this comment thread, though I will provide an additional round of responses.
I don’t think your definition of intent-alignment requires any unaligned system to have a model of the data-generating process, so I don’t understand the relevance of this. GPT-3 is not unaligned because it has a model of the data-generating process, and I didn’t claim that.
I did claim that neither GPT-3 nor Instruct-GPT is “trying to do what the operator wants it to do”, according to your definition, and that the primary reason for that is that, in as much as its training process did produce a model that has “goals” and so can be modeled in any consequentialist terms, those “goals” do not match up with trying to be helpful to the operator. Most likely, they are a pretty messy objective we don’t really understand (which in the case of GPT-3 might be best described as “trying to generate text that in some simple latent space resembles the training distribution”; I don’t have any short description of what the “goals” of Instruct-GPT might be, though my guess is they are still pretty close to GPT-3’s goals).
I don’t think we know what Instruct-GPT is “trying to do”, and it seems unlikely to me that it is “trying to do what I want”. I agree in some sense it is “more trying to do what I want”, though not in a way that feels obviously very relevant to more capable systems, and not in a way that aligns very well with your intent definition (I feel like if I had to apply your linked definition to Instruct-GPT, I would say something like “ok, seems like it isn’t intent aligned, since the system doesn’t really seem to have much of an intent. And if there is a mechanism in its inner workings that corresponds to intent, we have no idea what thing it is pointed at, so probably it isn’t pointed at the right thing”).
And in either case, even if it is the case that if you squint your eyes a lot the system is “more aligned”, this doesn’t make the sentence “many of today’s systems are aligned unless humans make mistakes in evaluation or are deployed in high-stakes environments” true. “More aligned” is not equal to “aligned”.
The correct sentence seems to me “many of those systems are still mostly unaligned, but might be slightly more aligned than previous systems, though we have some hope that with better evaluation we can push that even further, and the misalignment problems are less bad on lower-stakes problems when we can rely on average-case performance, though overall the difference in alignment between GPT and Instruct-GPT is pretty unclear and probably not very large”.
This seems wrong to me. On-distribution, it seems to me that the system is usually optimizing for something that I don’t want. For example, GPT-3 is primarily trying to generate text that represents the distribution it’s drawn from, which very rarely aligns with what I want (and is why prompt-engineering has such a large effect; e.g. “you are Albert Einstein” as a prefix improves performance on many tasks). Instruct-GPT does a bit better here, but probably most of its internal optimization power is still thrown at reasoning with the primary “intention” of generating text that is similar to its input distribution, since it seems unlikely that the fine-tuning completely rewrote most of these internal heuristics.
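(As a toy illustration of the prompt-engineering point, assuming the era-appropriate OpenAI completions client; the prompts and any effect size here are illustrative, not measured:)

```python
import openai

openai.api_key = "sk-..."  # hypothetical key

question = "Q: By what factor does time dilate at 0.8c?\nA:"

# Plain completion.
plain = openai.Completion.create(
    model="text-davinci-002", prompt=question, max_tokens=64
)

# Same question with a persona prefix. The claim above is that a prefix like
# this often helps, by shifting which slice of the training distribution the
# model imitates, rather than by changing anything about what it "wants".
prefixed = openai.Completion.create(
    model="text-davinci-002",
    prompt="You are Albert Einstein, a brilliant physicist.\n" + question,
    max_tokens=64,
)
```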
My guess is that if Instruct-GPT were intent-aligned even for low-impact tasks, we could get it to be substantially more useful on many tasks. But my guess is what we currently have is mostly a model that is still primarily “trying” to generate text that is similar to its training distribution, with a few heuristics baked in during the human-feedback stage that make that text more likely to be a good fit for the question asked. In as much as the model is “trying to do something”, i.e. what most of its internal optimization power is pointed at, I am very skeptical that that is aligned with my task.
(Similarly, looking at Redwood’s recent model, it seems clear to me that they did not produce a model that “intends” to produce non-injurious completions. The model has two parts, one that is just “trying” to generate text similar to its training distribution, and a second part that is “trying” to detect whether a completion is injurious. This model seems clearly not intent-aligned, since almost none of its optimization power is going towards our target objective.)
My guess is a lot of work is done here by the term “average case alignment”, so I am not fully sure how to respond. I disagree that the on-paper argument looks quite good, though it depends a lot on how narrowly you define “on-distribution”. Given my arguments above, you must either mean something different from intent-alignment (since to me at least it seems clear that Redwood’s model is not intent-aligned), or disagree with me on whether systems like Redwood’s are intent-aligned, in which case I don’t really know how to consistently apply your intent-alignment definition.
I also feel particularly confused about the term “average-case alignment”, combined with “intent-alignment”. I can ascribe goals at multiple different levels to a model, and my guess is we both agree that describing current systems as having intentions at all is kind of fraught, but in as much as a model has a coherent goal, it seems like that goal is pretty consistent between different prompts, and so I am confused about why we should expect average-case alignment to be very different from normal alignment. It seems that if I have a model that is trying to do something, then asking it multiple times probably won’t make a difference to its intention. (I think; I mean, again, this all feels very handwavy, which is part of the reason why it feels so wrong to me to describe current models as “aligned”.)
I currently think that the main relevant similarities between Instruct-GPT and a model that is trying to kill you, are about errors of the overseer (i.e. bad outputs to which they would give a high reward) or high-stakes errors (i.e. bad outputs which can have catastrophic effects before they are corrected by fine-tuning).
I’m interested in other kinds of relevant similarities, since I think those would be exciting and productive things to research. I don’t think the framework “Instruct-GPT and GPT-3 e.g. copy patterns that they saw in the prompt, so they are ‘trying’ to predict the next word and hence are misaligned” is super useful, though I see where it’s coming from and agree that I started it by using the word “aligned”.
Relatedly, and contrary to my original comment, I do agree that there can be bad intentional behavior left over from pre-training. This is a big part of what ML researchers are motivated by when they talk about improving the sample-efficiency of RLHF. I usually try to discourage people from working on this issue, because it seems like something that will predictably get better rather than worse as models improve (and I expect you are even less happy with it than I am).
I agree that there is a lot of inferential distance, and it doesn’t seem worth trying to close the gap here. I’ve tried to write down a fair amount about my views, and I’m always interested to read arguments / evidence / intuitions for more pessimistic conclusions.
I agree with this, though it’s unrelated to the stated motivation for that project or to its relationship to long-term risk.
Phrased this way, I still disagree, but I think I disagree less strongly, and feel less of a need to respond to this. I care particularly much about using terms like “aligned” in consistent ways. Importantly, having powerful intent-aligned systems is much more useful than having powerful systems that just fail to kill you (e.g. because they are very conservative), and so getting to powerful aligned systems is a win-condition in the way that getting to powerful non-catastrophic systems is not.
Yep, I didn’t intend to imply that this was in contrast to the intention of the research. It was just on my mind as a recent architecture that I was confident we both had thought about, and so could use as a convenient example.