Scalable oversight as a quantitative rather than qualitative problem

[Many of these ideas were developed in conversation with Ryan Greenblatt and Ansh Radhakrishnan; a lot of this isn’t original but I haven’t seen it written up]

A lot of the time when people talk about scalable oversight, which I’ll define for the moment as “the problem of developing techniques that allow you to assess whether an action taken or proposed by an AI is good (i.e. to oversee it), in a way that works even if the AI is very smart and knowledgeable”, the main application they describe is the following: you have an extremely powerful AI that’s taking actions with subtle properties that humans are unable to understand, and you want to oversee those actions (e.g. to train the AI, or to determine whether the actions will have catastrophic consequences).

I think that this is indeed part of the value proposition for scalable oversight. But in my opinion, it’s missing the more central application of these techniques: situations where the AIs are taking many actions, where humans could eventually understand any particular action if they spent a long time investigating it, but where the time required to oversee each action is prohibitively large. In such cases, the point of scalable oversight is to let humans oversee actions at a much lower cost in human time, i.e. to push out the Pareto frontier of oversight quality vs. cost.

Of course, when you’re thinking about arbitrarily subtle actions, this distinction doesn’t matter: there are actions subtle enough that overseeing them would take more than the whole budget of the AI project, and for those actions there’s no difference between “we can’t afford to oversee it” and “we’d never be able to oversee it even with infinite budget”.

But I think that this distinction will be very important in practice, and I often think that people make reasoning errors when thinking about scalable oversight because they neglect it. The basic reason it’s important: we probably need to oversee many model actions, so we probably need to decide on a per-action oversight budget (e.g. the amount of human labor spent overseeing each action), which determines what subtlety of actions we will or won’t be able to oversee; and the per-action oversight budget we choose will almost certainly be far smaller than the whole AI project budget.
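To make this concrete, here’s a toy back-of-the-envelope calculation. All of the numbers are made up purely for illustration; the point is just how quickly a large project-level budget turns into a tiny per-action one:

```python
# Toy calculation: project-level oversight budget -> per-action budget.
# All figures below are hypothetical, chosen only to illustrate scale.

total_oversight_hours = 1_000_000   # total human labor available for oversight
actions_per_day = 100_000           # actions taken by the AI per day (assumed)
deployment_days = 365               # length of the deployment (assumed)

total_actions = actions_per_day * deployment_days
hours_per_action = total_oversight_hours / total_actions

print(f"Total actions to oversee: {total_actions:,}")
print(f"Per-action oversight budget: {hours_per_action * 60:.2f} minutes")
```

Under these (made-up) numbers, a million hours of human labor buys well under two minutes of oversight per action, even though the project-level budget could afford to scrutinize any single action for months.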

Another reason this feels important to me is that, as discussed here, I think a lot of AI risk comes from early transformative models that aren’t much smarter than humans. When thinking through the risk associated with those models, it seems pretty implausible that a dangerous action of theirs would be unoverseeable if the whole AI project’s budget were spent on overseeing it, but it seems very plausible that, with a practical per-action oversight budget, their actions will be too subtle to oversee properly.

What implications does this have for research?

  • One question that seems extremely important to me as a result of all this: what sample efficiency should we expect when training very powerful models? This is relevant because it’s a key input to forecasting the per-action oversight budget. In particular, if the sample efficiency of RL increases with model scale, it might turn out that the optimal strategy for RLing early transformative models is to produce many fewer, much more expensive labels than people use when training current systems; I think people often neglect this possibility when thinking about the future of scalable oversight. (I’m using “labels” to mean the output of the oversight process: it might be rewards, or preferences between proposed actions.) I’d love to see some careful research analyzing the tradeoff between label quality and quantity, and how it changes with model scale. (Ansh has done a bit of investigation along these lines.)

  • When imagining scalable oversight in the future, I don’t think people should only be imagining cases where the researchers are fundamentally unable to understand what’s going on or to ever do any labeling themselves; they should also be imagining cases where the researchers can produce a few labels themselves, but doing so is hard and expensive for them. This is somewhat like the situation researchers face when developing jailbreaks for models: they can do it, but each one takes quite a while to develop, so they can’t, for example, have a quick feedback loop where they change the policy they’re overseeing and then quickly oversee a bunch of new random actions it took.

  • It might be interesting to see research on how human oversight quality changes as a function of the resources allocated to it, especially in key domains like auditing code for backdoors (as in our paper), or in cases where we expect producing high-quality oversight to be particularly expensive.
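As a sketch of the quality-vs-quantity analysis mentioned above, here’s a toy model of allocating a fixed labeling budget. The functional form and every number in it are assumptions chosen purely for illustration (in particular, the idea that below some threshold of effort an overseer almost never catches a subtle flaw), not empirical claims:

```python
import math

# Toy model: fixed labeling budget, trade off label quality vs. quantity.
# The catch-probability curve and all constants are hypothetical.

BUDGET = 10_000  # total human-hours available for labeling (assumed)

def catch_probability(hours_per_label: float) -> float:
    """Assumed probability that a single label catches a subtle flaw:
    a logistic curve with a midpoint at 5 hours of effort, encoding the
    (hypothetical) idea that rushed oversight almost never catches it."""
    return 1 / (1 + math.exp(-(hours_per_label - 5.0)))

for hours_per_label in [1, 2, 5, 7, 10, 20]:
    n_labels = BUDGET / hours_per_label
    expected_catches = n_labels * catch_probability(hours_per_label)
    print(f"{hours_per_label:4.0f} h/label -> {n_labels:6.0f} labels, "
          f"~{expected_catches:5.0f} expected flaws caught")
```

Under this made-up curve, neither the cheapest nor the most expensive labels are optimal: the interior optimum (around 7 hours per label here) is exactly the kind of quantity that careful research on real oversight tasks could pin down.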

Another minor distinction: I think it’s useful to distinguish between cases where your oversight process isn’t good enough because (for a fixed level of quality) it has too little throughput (that is, given your fixed budget, you can’t produce enough labels) and cases where it has too much latency (that is, your labels arrive too late). Latency might be a problem for a few reasons. One is that if you’re using your oversight to review potentially catastrophic actions, your AI agents might be waiting around for the result of the review, and if you’re too slow, they might be substantially less efficient. Another is that online training works worse when labels arrive with more latency.
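To illustrate why latency can bite even when throughput is ample, here’s a minimal example with hypothetical numbers, for the case where an agent must block on review before acting:

```python
# Toy illustration (all numbers hypothetical): latency alone can idle
# agents that must wait for an oversight verdict before proceeding,
# even if the oversight process has plenty of throughput.

action_duration_min = 5    # time the agent spends producing an action (assumed)
review_latency_min = 15    # time until the oversight verdict arrives (assumed)

# If the agent blocks on each review, its utilization is the fraction of
# wall-clock time it spends doing useful work:
utilization = action_duration_min / (action_duration_min + review_latency_min)
print(f"Agent utilization when blocking on review: {utilization:.0%}")
```

With these numbers the agent is productive only a quarter of the time, which is why latency matters separately from the question of whether you can afford enough labels overall.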