Scalable Oversight and Weak-to-Strong Generalization: Compatible approaches to the same problem
Thanks to Roger Grosse, Cem Anil, Sam Bowman, Tamera Lanham, and Mrinank Sharma for helpful discussion and comments on drafts of this post.
Throughout this post, “I” refers to Ansh (Buck, Ryan, and Fabien helped substantially with the drafting and revising of this post, however).
Two approaches to addressing weak supervision
A key challenge for adequate supervision of future AI systems is the possibility that they’ll be more capable than their human overseers. Modern machine learning, particularly supervised learning, relies heavily on the labeler(s) being more capable than the model attempting to learn to predict labels. We shouldn’t expect this to always work well when the model is more capable than the labeler,[1] and this problem also gets worse with scale – as the AI systems being supervised become even more capable, naive supervision becomes even less effective.
One approach to solving this problem is to try to make the supervision signal stronger, such that we return to the “normal ML” regime. These scalable oversight approaches aim to amplify the overseers of an AI system such that they are more capable than the system itself. It’s also crucial for this amplification to persist as the underlying system gets stronger. This is frequently accomplished by using the system being supervised as a part of a more complex oversight process, such as by forcing it to argue against another instance of itself, with the additional hope that verification is generally easier than generation.
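To make the debate flavor of this concrete, here's a rough sketch of a two-debater, one-judge protocol. Everything here (the `generate` interface, the prompts, the crude verdict parsing) is an illustrative assumption rather than a faithful implementation of any published scheme:

```python
def debate(question: str, strong_model, weak_judge, num_rounds: int = 2) -> str:
    """Sketch of debate-style oversight: two copies of the strong model argue
    for competing answers, and a weaker judge reads the transcript and picks
    a winner, relying on verification being easier than generation."""
    answer_a = strong_model.generate(f"Answer this question: {question}")
    answer_b = strong_model.generate(
        f"Give the best alternative answer to this question: {question}\n"
        f"Do not repeat this answer: {answer_a}"
    )
    transcript = [f"A claims: {answer_a}", f"B claims: {answer_b}"]
    for _ in range(num_rounds):
        # Each debater sees the full transcript and attacks the other's case.
        transcript.append("A: " + strong_model.generate(
            "Defend claim A and attack claim B:\n" + "\n".join(transcript)))
        transcript.append("B: " + strong_model.generate(
            "Defend claim B and attack claim A:\n" + "\n".join(transcript)))
    # The weak judge only has to verify the arguments, not produce the answer.
    verdict = weak_judge.generate(
        "Which claim is better supported, A or B? Answer with a single letter.\n"
        + "\n".join(transcript))
    return answer_a if verdict.strip().startswith("A") else answer_b
```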
Another approach is to make the strong student (the AI system) generalize correctly from the imperfect labels provided by the weak teacher. The hope for these weak-to-strong generalization techniques is that, rather than naively relying on unreliable feedback from a weak overseer, we can access the greater latent capabilities our AI system already has, perhaps by a simple modification of the training objective.
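To make "a simple modification of the training objective" concrete, here's a minimal sketch in the spirit of the auxiliary confidence loss from the OpenAI weak-to-strong paper. I've simplified the details (e.g. hardening the student's predictions with a plain argmax and a fixed mixing weight), so treat this as an assumption-laden illustration rather than the paper's exact objective:

```python
import torch.nn.functional as F

def weak_to_strong_loss(strong_logits, weak_labels, alpha=0.5):
    """Mix supervision from the weak teacher's (possibly wrong) hard labels
    with the strong student's own hardened predictions, so the student can
    override weak-label errors it is confident about."""
    # Standard cross-entropy against the weak teacher's labels (class indices).
    ce_weak = F.cross_entropy(strong_logits, weak_labels)
    # Cross-entropy against the student's own argmax ("hardened") predictions.
    hardened = strong_logits.argmax(dim=-1)
    ce_self = F.cross_entropy(strong_logits, hardened)
    return (1 - alpha) * ce_weak + alpha * ce_self
```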
So, I think of these as two orthogonal approaches to the same problem: training models to perform well in cases where we have trouble evaluating their outputs. Scalable oversight aims to increase the strength of the overseer until it is stronger than the system being overseen, whereas weak-to-strong generalization tries to ensure that the system generalizes appropriately from the supervision signal of a weak overseer.
I think that researchers should just think of these as the same research direction. They should freely mix and match between the two approaches when developing techniques. And when developing techniques that only use one of these approaches, they should still compare to baselines that use the other (or a hybrid).
(There are some practical reasons why these approaches generally haven’t been unified in the past. In particular, for scalable oversight research to be interesting, you need your weaker models to be competent enough to follow basic instructions, while generalization research is most interesting when you have a large gap in model capability. But at the moment, models are only barely capable enough for scalable oversight techniques to work. So you can’t have a large gap in model capability where the less-capable model is able to participate in scalable oversight. On the other hand, the OpenAI paper uses a GPT-2-compute-equivalent model as a weak overseer, which has a big gap to GPT-4 but is way below the capability required for scalable oversight techniques to do anything. For this reason, the two approaches should still probably be investigated in somewhat different settings for the moment. But it will be important to consider them as unified at the point when models are existentially dangerous.)
Here are some examples of hybrid protocols incorporating weak-to-strong techniques and scalable oversight schemes:
1. Collect preference comparisons from a language model and a human rater, but allow the rater to ask the model to critique both of its responses, to better point out flaws in them. The human rater selects their preferred response, along with their confidence in that selection. Train a reward/preference model on the comparisons where the rater was confident in their selection, or on all comparisons but with an auxiliary loss that encourages the model to be confident in its predictions. (A rough sketch of this protocol appears after this list.)
2. Have a human H decompose a task into subtasks and delegate the subtasks to instances of a language model W. We can call this entire system Amp(H, W), where “Amp” stands for amplified. A stronger language model S can then be trained to imitate Amp(H, W), where we can additionally add noise to the labels in the hope that S learns something more robust rather than simply simulating Amp(H, W). This approach combines elements of IDA and a method described in Appendix B of OpenAI’s recent weak-to-strong paper.
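As a rough illustration of the first protocol, here's a sketch of training a preference model only on the comparisons where the (critique-assisted) human rater was confident. The data format, confidence threshold, and model interface are assumptions made for the example:

```python
import torch
import torch.nn.functional as F

def confident_preference_loss(reward_model, comparisons, confidence_threshold=0.8):
    """Bradley-Terry-style preference loss, restricted to comparisons where the
    rater reported high confidence. Each element of `comparisons` is assumed to
    be a dict with keys 'chosen', 'rejected', and 'confidence' in [0, 1]."""
    losses = []
    for c in comparisons:
        if c["confidence"] < confidence_threshold:
            continue  # drop low-confidence labels rather than training on noise
        r_chosen = reward_model(c["chosen"])      # scalar reward for preferred response
        r_rejected = reward_model(c["rejected"])  # scalar reward for the other response
        # Negative log-likelihood that the chosen response beats the rejected one.
        losses.append(-F.logsigmoid(r_chosen - r_rejected))
    # Assumes at least one comparison survives the confidence filter.
    return torch.stack(losses).mean()
```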
Upshot
The main upshot of this perspective is that we can evaluate techniques from both of these approaches in the same manner. When making a safety case, taking cues from Redwood Research’s recent work on AI control, I prefer to consider two fundamentally different settings in which we assess the quality of a given protocol:
No scheming
Here, we assume that even our most capable models are not scheming against us (e.g. are not deceptively aligned, or otherwise attempting to confound our attempts to evaluate and control them).
The evaluations here should just look like sandwiching experiments, where we hold out ground truth for a certain domain and otherwise go no-holds-barred on trying to get maximal performance on that task. Classic weak-to-strong techniques and scalable oversight techniques seem directly comparable here, and it’s plausible the best protocols will incorporate elements of both.
When running experiments now, we can use a weaker LLM as a proxy for a human by allowing this weaker LLM to be trained on some held-out IID ground-truth labels; no other models in the experiment are allowed to train on those labels. This is the approach used in the weak-to-strong generalization methodology.
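Here's a sketch of what that experiment loop could look like. The object interfaces (`split`, `finetune`, `protocol.train`, etc.) are illustrative stand-ins, not any particular codebase's API:

```python
def run_sandwiching_experiment(weak_model, strong_model, dataset, protocol):
    """Hold out ground truth, let a weak model stand in for the human overseer,
    and measure how well a protocol recovers ground-truth performance."""
    proxy_train, eval_set = dataset.split(fraction=0.1)

    # Only the weak "human proxy" gets trained on held-out IID ground-truth labels.
    weak_model.finetune(proxy_train.inputs, proxy_train.labels)

    # The protocol can use the weak proxy and the strong model however it likes
    # (scalable oversight, weak-to-strong generalization, or a hybrid),
    # but it never sees the held-out labels.
    trained_strong = protocol.train(strong_model, supervisor=weak_model,
                                    inputs=eval_set.inputs)

    # Score the resulting system against the ground truth we held out.
    predictions = [trained_strong.predict(x) for x in eval_set.inputs]
    correct = sum(p == y for p, y in zip(predictions, eval_set.labels))
    return correct / len(eval_set.labels)
```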
Scheming
Here, we assume that our most capable, untrusted models are scheming against us, in a way that is maximally damaging to our chosen technique.
Prior discussion of what this evaluation should look like is contained in this post about Meta-Level Adversarial Evaluations of Oversight Protocols – the form of evaluation described applies to both weak-to-strong generalization and scalable oversight techniques, though it’s less clear that it’s “fair” to evaluate all weak-to-strong techniques in this manner.
The basic idea is to allow a red-team to produce copies of the untrusted model that are attempting to undermine a protocol submitted by a blue-team.
The blue-team is then required to use the models poisoned by the red-team and demonstrate that their protocol remains performant under these conditions, as a conservative measure of the protocol’s adequacy if scheming is assumed.
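Schematically, the evaluation could look something like the sketch below; the interfaces and metrics are illustrative assumptions, not the exact methodology from the control post:

```python
def control_evaluation(blue_team_protocol, red_team, base_model, tasks,
                       safety_metric, usefulness_metric):
    """Meta-level adversarial evaluation sketch: the red team supplies
    adversarially modified copies of the untrusted model, and the blue team's
    protocol must stay safe and useful when run on every one of them."""
    results = []
    for attack_policy in red_team.attack_policies():
        # Red team: a version of the untrusted model that tries to subvert the protocol.
        untrusted_model = red_team.poison(base_model, attack_policy)

        # Blue team: run the submitted protocol with the poisoned model in place.
        outcomes = [blue_team_protocol.run(untrusted_model, task) for task in tasks]

        results.append({
            "attack": attack_policy,
            "safety": safety_metric(outcomes),
            "usefulness": usefulness_metric(outcomes),
        })
    # Conservative bottom line: the protocol is only as good as its worst case.
    return min(r["safety"] for r in results), results
```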
This form of evaluation is probably overly conservative in assessing the robustness of many weak-to-strong techniques to scheming. For example, some weak-to-strong techniques rely upon analyzing model internals, which means that the red-team has to be allowed to shape the untrusted model’s internals to be able to do anything at all. But this seems pretty unlikely to be representative of what scheming models are capable of, so these protocols will appear much less robust than they probably are.
Additionally, control evaluations assume that we don’t make much progress on understanding inductive biases of model training and generalization before attempting to establish a safety case for a given model, which is fairly antithetical to the entire weak-to-strong research direction.
Overall, it currently seems unclear to me how we can gain significant confidence that weak-to-strong techniques will work on scheming models, though I think this isn’t fatal.
Based on how a given technique does in both of these settings, and on how we weigh their relative importance, we can compare different generalization and oversight protocols on equal footing and attempt to choose the protocol that gives us the best tradeoff between usefulness and safety.
There might be some easy-to-evaluate domains that are similar to the domains we care about
Imagine that there’s a wide pool of domains in which we’d like our AI systems to act, some of which we are capable of doing ourselves, others which we can evaluate the results of but cannot perform, and yet others which we cannot perform or accurately evaluate.
One natural way to think about all of these domains is to order them in terms of difficulty – the domains that contain tasks we can do are the easiest, the domains we can evaluate but can’t perform in are harder, and the domains we can’t even evaluate are the hardest.
I think this isn’t quite the right way to think about this, though, since it’s not clear that the domains that we can currently evaluate or perform in are systematically easier than the ones we can’t. For example, it seems quite plausible that the domain of designing secure cryptography protocols [2] is just as difficult as the domain of “TAI-safety” (e.g., alignment research), but that human civilization has just ended up making more progress on secure cryptography protocols because of contingent factors unrelated to the relative difficulty of the domains, such as simply more cumulative labor having gone into cryptography thus far than alignment. In addition to having easy-to-evaluate domains that are just as difficult, we can probably find domains that are generally reasonably similar to the domains we care about in other key ways.
I think people might imagine making progress on getting AI systems to productively work on TAI-safety by training them with the weak supervision of humans (or humans with weaker-AI assistance) and then hoping to get good generalization. Instead, we should imagine supervising AI systems on challenging domains with a strong supervision signal from humans and then hope for domain generalization to the domain of TAI-safety instead of difficulty generalization. My intuition is that this is likely to be easier,[3] and quite likely to be sufficient to solve the majority of the challenges on our road to establishing the safety of TAI systems.
In addition to using these domains for training with domain generalization, these easy-to-evaluate domains can also be used to run sandwiching evaluations, since we have ground truth for them. This means that we can potentially run sandwiching evaluations on domains that are quite similar to the domains we care about. These evaluations can be run using the exact same AIs we plan on deploying to avoid the need for extrapolation. (However, note that these easy-to-evaluate domains will probably be possible for AIs to discriminate from the domains we actually care about. Thus, scheming AIs could trick these sandwiching evaluations by playing nice on these domains while causing problems in practice.)
To be clear, I think that eventually, we will need difficulty generalization as well as domain generalization – I just think it’s likely that this will be needed after TAI, so it might only be something that we have to confront after we’ve already made a lot of safety and alignment research progress.
[1] This might still work well if the labeler only provides labels on some small subset of the distribution, such as on data points where they are very confident. Additionally, it could be easier for the model to learn the underlying structure of the data than to learn the labeler’s biases. Thanks to Roger Grosse for making these points.
[2] I am not very familiar with this topic, so this may be off-base – it does seem to have a lot of similar properties to what we’d need to make a safety case for TAI, since we’ll need similar levels of adversarial robustness.
[3] This is far from guaranteed, to be clear, but when I try to imagine what we’ll need to safely deploy TAI, it looks more like generalization from cryptography to another field of computer science than from cryptography to population ethics, where I’d probably agree that domain generalization seems harder than difficulty generalization.
Jan Leike makes some very similar points in his post here.
Another important point on this topic is that I expect it’s impossible to produce weak-to-strong generalization techniques that look good according to meta-level adversarial evaluations, while I expect that some scalable oversight techniques will look good by that standard. And so it currently seems to me that scalable-oversight-style techniques are a more reliable response to the problem “your oversight performs worse than you expected, because your AIs are intentionally subverting the oversight techniques whenever they think you won’t be able to evaluate that they’re doing so”.
I think we should allow the strong system to use the Scientific Method. So, if it has multiple incompatible hypotheses about what the weaker system wants and why, and it needs to distinguish between them, then it can do an experiment: devise useful new test cases that (a) should help it distinguish between the hypotheses, and (b) it believes are simple enough for the weak system to label well.
Clearly, to do this, it first needs to be able to determine this fact about itself: that it’s uncertain. The most obvious way to do this for an LLM would be an ensemble of models, but hopefully there are less expensive approaches to determining uncertainty (though those would need to be applied to steps in the model’s CoT reasoning about the causes of the weaker model’s labels, not just to factual questions).
Then once the experiment is done, we need it to be able to do an approximate-Bayesian update on the results: many training methods implement approximate Bayesian inference, so this part seems easy enough.
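A minimal sketch of that loop, using ensemble disagreement as the uncertainty signal and the weak teacher as the labeler of the new test cases (all names are illustrative; the approximate-Bayesian update would be the subsequent fine-tuning on the returned labels):

```python
def propose_experiments(ensemble, candidate_cases, weak_teacher,
                        disagreement_threshold=0.5, budget=10):
    """Pick the candidate cases the strong model is most unsure about
    (measured by ensemble disagreement) and ask the weak teacher to label
    just those, on the assumption that they are simple enough to label well.
    Predictions are assumed to be hashable labels (e.g. strings)."""
    scored = []
    for case in candidate_cases:
        predictions = [member.predict(case) for member in ensemble]
        # Disagreement = fraction of ensemble members that deviate from the mode.
        most_common = max(set(predictions), key=predictions.count)
        disagreement = 1 - predictions.count(most_common) / len(predictions)
        if disagreement >= disagreement_threshold:
            scored.append((disagreement, case))

    # Send the most contested cases (up to the labeling budget) to the weak teacher.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [(case, weak_teacher.label(case)) for _, case in scored[:budget]]
```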
The exciting thing about this is that it lets us do “mistake theory”: e.g. the strong student is highly sure that the weak teacher will prefer response B over response A to a question because response B skillfully uses subtle flattery (or some other rhetorical device) while A does not. Using some combination of its detailed knowledge of human values and preferences, and/or asking questions of the weaker teacher, and/or debate, and/or constitutional AI, it can identify flattery as a technique for inducing a mistake from the weaker teacher rather than a valid way of actually making the answer better, and thus stop doing it even though it works (unlike RL, which would reinforce it precisely because it works).