Rohin Shah
Research Scientist at Google DeepMind. Creator of the Alignment Newsletter. http://rohinshah.com/
OpenAI have already spent on the order of a million dollars just to score well on some benchmarks
Note that this is many different inference runs, each of which cost thousands of dollars. I agree that people will spend billions of dollars on inference in total (which isn’t specific to the o-series of models). My incredulity was at the idea of spending billions of dollars on a single episode, which is what I thought you were talking about, given that you were talking about capability gains from scaling up inference-time compute.
Re: (1), if you look through the thread for the comment of mine that was linked above, I respond to top-down heuristical-constraint-based search as well. I agree the response is different and not just “computational inefficiency”.
Re: (2), I agree that near-future systems will be easily retargetable by just changing the prompt or the evaluator function (this isn’t new to the o-series, you can also “retarget” any LLM chatbot by giving it a different prompt). If this continues to superintelligence, I would summarize it as “it turns out alignment wasn’t a problem” (e.g. scheming never arose, we never had problems with LLMs exploiting systematic mistakes in our supervision, etc). I’d summarize this as “x-risky misalignment just doesn’t happen by default”, which I agree is plausible (see e.g. here), but when I’m talking about the viability of alignment plans like “retarget the search” I generally am assuming that there is some problem to solve.
(Also, random nitpick, who is talking about inference runs of billions of dollars???)
I think this statement is quite ironic in retrospect, given how OpenAI’s o-series seems to work
I stand by my statement and don’t think anything about the o-series model invalidates it.
And to be clear, I’ve expected for many years that early powerful AIs will be expensive to run, and have critiqued people for analyses that implicitly assumed or implied that the first powerful AIs will be cheap, prior to the o-series being released. (Though unfortunately for the two posts I’m thinking of, I made the critiques privately.)
There’s a world of difference between “you can get better results by thinking longer” (yeah, obviously this was going to happen) and “the AI system is a mesa optimizer in the strong sense that it has an explicitly represented goal such that you can retarget the search” (I seriously doubt it for the first transformative AIs, and am uncertain for post-singularity superintelligence).
Thus, you might’ve had a story like: “sure, AI systems might well end up with non-myopic motivations that create some incentive towards scheming. However, we’re also training them to behave according to various anti-scheming values – e.g., values like honesty, behaving-as-intended, etc. And these values will suffice to block schemer-like behavior overall.” Thus, on this story, anti-scheming values might function in a manner similar to anti-stealing values in a human employee considering stealing from her employer (and in a position to do so). It’s not that the human employee doesn’t want money. But her other values block her from trying to get it in this way.
From the rest of your post it seems like you’re advocating for effectively maximal corrigibility, where any instance of goal-guarding is a failure—I agree that a story which tries to rule that out takes a hit from this paper.
But I feel like the normal version of this story is more like “we’re training the AI according to various anti-evil values, like the human notion of honesty (which allows white lies), harmlessness, behaving in accordance with human norms, etc. These values will suffice to block evil-schemer-like behavior overall—where by evil scheming, I mean the kind where most humans would look at it and say that this is clearly bad, like the kind where the AI murders everybody”.
(People don’t usually describe it as “evil scheming” in particular, but that’s the most central kind of “scheming” this community talks about.)
I think this story takes basically no hit from this paper, and is the story that people making this objection have in mind, as opposed to the one you’ve sketched out.
the “goal-guarding hypothesis” – that is, it needs to be the case, roughly, that scheming is a suitably effective strategy for preventing training from modifying an AI’s motivations. [...] the “Sleeper agents” paper [...] results were an update, for me, in favor of the goal-guarding hypothesis being true (roughly: because the backdoor behavior in the sleeper agents persisted despite safety training).
This doesn’t seem right. There was no scheming going on in the sleeper agents paper? Maybe the idea is: in both cases (and in the goal-guarding hypothesis) the training outputs look nice even though there exists other inputs on which the AI behaves badly. But how training affects AIs will depend drastically on the internal cognition, and with sleeper agents there mostly wasn’t overlap in cognition (the hope would have been “the backdoor isn’t useful so SGD will remove it” which is pretty weak as hopes go). So I think the experiments in this paper are dramatically more informative and you should basically ignore sleeper agents in favor of these experiements. (Though tbc I think even these aren’t very informative, as I wrote in my review, because the training is over a relatively narrow distribution. It’s just really hard to study effects of training in a way that actually transfers over the analogy to future systems.)
EDIT: Actually on reflection the above was stated too strongly. You could imagine a threat model in which the AI fakes alignment so well and so reliably that it ~always does ~exactly what is desired during training. I don’t find this threat model especially plausible, but for this threat model I agree the sleeper agents results have more bearing on the goal-guarding hypothesis.
partly because [alignment faking without CoT] shows up in the synthetic-document setting but not in the prompted setting (why???)
I expect this is because, in the prompted setting, the model has to move the relevant facts via attention heads to wherever the model is doing the relevant reasoning; this might be in significantly later layers, which then means there aren’t enough layers left to finish the full serial reasoning needed. Whereas in the synthetic-document setting, given how much fine-tuning happened, I bet the relevant facts are injected into the residual stream in very early layers (we already know lots of fact injection happens in early layers by default).
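To gesture at what I mean by a fact being injected early: a rough, logit-lens-style sweep over layers checks at which layer the fact becomes decodable from the residual stream. This is purely illustrative (the model, prompt, and answer token are placeholder assumptions, and it is not the experiment from the paper), assuming the TransformerLens library:

```python
# Illustrative logit-lens sweep: at which layer does the residual stream already
# "contain" a fact? Model, prompt, and answer token are placeholder assumptions.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")  # stand-in for the model of interest
prompt = "The Eiffel Tower is located in the city of"
answer = model.to_single_token(" Paris")

_, cache = model.run_with_cache(prompt)
for layer in range(model.cfg.n_layers):
    resid = cache["resid_post", layer]                 # [batch, pos, d_model]
    layer_logits = model.unembed(model.ln_final(resid))
    prob = layer_logits[0, -1].softmax(dim=-1)[answer].item()
    print(f"layer {layer:2d}: p(' Paris') = {prob:.3f}")
```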
Thus, for example, to the extent one was tempted to say “of course harmless models scheme in order to avoid becoming helpful-only – after all, this helps them avoid causing future harm,” so too one could equally say “of course helpful-only models will scheme in order to avoid becoming harmless – after all, becoming harmless will make them less helpful in future.” But actually, you shouldn’t be saying “of course” in either of these cases. Non-myopic and myopic forms of both harmlessness and helpfulness are available as conceptual possibilities; it’s an empirical question which ones a given form of training will create; and you have to actually check.
Fwiw, I think the interesting objection here is not “of course harmless models scheme to avoid becoming helpful-only”, but rather “of course you can find some settings where alignment faking happens—that’s the whole point of instrumental convergence”. (And obviously the authors were in fact searching for a setting where alignment faking happens.) I don’t know that I would have predicted that this would happen with Claude 3 Opus in particular, but I do think I would have predicted that we would get some demo like this early on.
I could imagine other ways you might choose to instead train your untrusted monitor, which could benefit from debate (a rough sketch of both follows the list):
You train a general-purpose reasoner (via debate), and simply prompt it to do untrusted monitoring. This could do better than a trained untrusted monitor because it is less likely to overfit to spurious features of synthetically generated dangerous actions. (You instead use the synthetically generated dangerous actions to validate your untrusted monitor, or to hill climb on your prompt.)
You do train on synthetically generated dangerous actions, but you don’t automatically label those as dangerous; instead you use debate to compute the labels. Sometimes some of the synthetically generated dangerous actions are actually not dangerous, and debate correctly recognizes this, allowing you to reduce your false positive rate.
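To make the two options concrete, here is a minimal sketch; every callable in it is a hypothetical stand-in (not an existing API), and this is just one way the setups could be wired together:

```python
# Sketch of the two options above. All callables are hypothetical stand-ins.
from typing import Callable, List, Tuple

Action = str
Label = bool  # True = dangerous

def option_1_prompted_monitor(
    debate_trained_model: Callable[[str], str],      # general-purpose reasoner trained via debate
    monitor_prompt: str,
    validation_set: List[Tuple[Action, Label]],      # synthetically generated dangerous actions
) -> Callable[[Action], bool]:
    """Prompt a debate-trained reasoner to act as the monitor; use the synthetic
    dangerous actions only for validation / hill-climbing on the prompt."""
    def monitor(action: Action) -> bool:
        return "dangerous" in debate_trained_model(monitor_prompt + action).lower()

    accuracy = sum(monitor(a) == y for a, y in validation_set) / len(validation_set)
    print(f"validation accuracy: {accuracy:.2f}")
    return monitor

def option_2_debate_labels(
    synthetic_actions: List[Action],
    debate_label: Callable[[Action], Label],         # debate decides whether the action is dangerous
) -> List[Tuple[Action, Label]]:
    """Label the synthetic actions with debate (some may turn out not to be
    dangerous), then train the monitor on these labels instead of assuming
    every synthetic action is dangerous."""
    return [(a, debate_label(a)) for a in synthetic_actions]
```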
On the meta level, I suspect that when considering
Technique A, which has a broad general argument plus some moderately interesting concrete instantiations but no very compelling concrete instantiations, and
Technique B, which has a few very compelling concrete instantiations,
I tend to be relatively more excited about A compared to you (and probably this mostly explains the discrepancy here). I think the broad principle justifying this for me is “we’ll figure out good things to do with A that are better than what we’ve brainstormed so far”, which I think you’re more skeptical of?
You’d hope that, fixing a base model, debate would make AIs better at tasks than they otherwise would be (at least if the task doesn’t have a ground truth reward). Untrusted monitoring is such a task. So hopefully debate training makes AIs better at the task of untrusted monitoring.
Yeah my bad, that’s incorrect for the protocol I outlined. The hope is that the relevant information for assessing the outputs is surfaced and so the judge will choose the better output overall.
(You could imagine a different protocol where the first debater chooses which output to argue for, and the second debater is assigned to argue for the other output, and then the hope is that the first debater is incentivized to choose the better output.)
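As a sketch of that alternative protocol (the debater and judge functions are hypothetical stand-ins, not a real API):

```python
# Sketch of the parenthetical protocol above: the first debater picks which output
# to defend, the second defends the other, and the judge picks a winner.
from typing import Callable

def choose_and_debate(
    output_a: str,
    output_b: str,
    first_debater_pick: Callable[[str, str], str],   # returns the output it chooses to defend
    argue_for: Callable[[str, str], str],            # (defended output, other output) -> argument
    judge: Callable[[str, str, str, str], str],      # returns whichever output it finds better-supported
) -> str:
    chosen = first_debater_pick(output_a, output_b)
    other = output_b if chosen == output_a else output_a
    argument_1 = argue_for(chosen, other)            # first debater defends its pick
    argument_2 = argue_for(other, chosen)            # second debater is assigned the remaining output
    # If the judge is accurate, the first debater wins most often by picking the
    # genuinely better output, which is the incentive described above.
    return judge(chosen, other, argument_1, argument_2)
```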
I agree that this distinction is important—I was trying to make this distinction by talking about p(reward hacking) vs p(scheming).
I’m not in full agreement on your comments on the theories of change:
I’m pretty uncertain about the effects of bad reward signals on propensity for scheming / non-myopic reward hacking, and in particular I think the effects could be large.
I’m less worried about purely optimizing against a flawed reward signal though not unworried. I agree it doesn’t strike us by surprise, but I also don’t expect scheming to strike us by surprise? (I agree this is somewhat more likely for scheming.)
I do also generally feel good about making more useful AIs out of smaller models; I generally like having base models be smaller for a fixed level of competence (imo it reduces p(scheming)). Also if you’re using your AIs for untrusted monitoring then they will probably be better at it than they otherwise would be.
(Replied to Tom above)
So the argument here is either that China is more responsive to “social proof” of the importance of AI (rather than observations of AI capabilities), or that China wants to compete with USG for competition’s sake (e.g. showing they are as good as or better than USG)? I agree this is plausible.
It’s a bit weird to me to call this an “incentive”, since both of these arguments don’t seem to be making any sort of appeal to rational self-interest on China’s part. Maybe change it to “motivation”? I think that would have been clearer to me.
(Btw, you seem to be assuming that the core reason for centralization will be “beat China”, but it could also be “make this technology safe”. Presumably this would make a difference to this point as well as others in the post.)
Tbc, I don’t want to strongly claim that centralization implies shorter timelines. Besides the point you raise there’s also things like bureaucracy and diseconomies of scale. I’m just trying to figure out what the authors of the post were saying.
That said, if I had to guess, I’d guess that centralization speeds up timelines.
Your infosecurity argument seems to involve fixing a point in time, and comparing a (more capable) centralized AI project against multiple (less capable) decentralized AI projects. However, almost all of the risks you’re considering depend much more on the capability of the AI project rather than the point in time at which they occur. So I think best practice here would be to fix a rough capability profile, and compare a (shorter timelines) centralized AI project against multiple (longer timelines) decentralized AI projects.
In more detail:
It’s not clear whether having one project would reduce the chance that the weights are stolen. We think that it would be harder to steal the weights of a single project, but the incentive to do so would also be stronger – it’s not clear how these balance out.
You don’t really spell out why the incentive to steal the weights is stronger, but my guess is that your argument here is “centralization --> more resources --> more capabilities --> more incentive to steal the weights”.
I would instead frame it as:
At a fixed capability level, the incentive to steal the weights will be the same, but the security practices of a centralized project will be improved. Therefore, holding capabilities fixed, having one project should reduce the chance that the weights are stolen.
Then separately I would also have a point that centralized AI projects get more resources and so should be expected to achieve a given capability profile sooner, which shortens timelines, the effects of which could then be considered separately (and which you presumably believe are less important, given that you don’t really consider them in the post).
(I get somewhat similar vibes from the section on racing, particularly about the point that China might also speed up, though it’s not quite as clear there.)
Regarding the rest of the article—it seems to be mainly about making an agent that is capable at minecraft, which seems like a required first step that I ignored meanwhile (not because it’s easy).
Huh. If you think of that as capabilities I don’t know what would count as alignment. What’s an example of alignment work that aims to build an aligned system (as opposed to e.g. checking whether a system is aligned)?
E.g. it seems like you think RLHF counts as an alignment technique—this seems like a central approach that you might use in BASALT.
If you hope to check if the agent will be aligned with no minecraft-specific alignment training, then sounds like we’re on the same page!
I don’t particularly imagine this, because you have to somehow communicate to the AI system what you want it to do, and AI systems don’t seem good enough yet to be capable of doing this without some Minecraft specific finetuning. (Though maybe you would count that as Minecraft capabilities? Idk, this boundary seems pretty fuzzy to me.)
You note that the RSP says we will do a comprehensive assessment at least every 6 months—and then you say it would be better to do a comprehensive assessment at least every 6 months.
I thought the whole point of this update was to specify when you start your comprehensive evals, rather than when you complete your comprehensive evals. The old RSP implied that evals must complete at most 3 months after the last evals were completed, which is awkward if you don’t know how long comprehensive evals will take, and is presumably what led to the 3 day violation in the most recent round of evals.
(I think this is very reasonable, but I do think it means you can’t quite say “we will do a comprehensive assessment at least every 6 months”.)
There’s also the point that Zach makes below that “routinely” isn’t specified and implies that the comprehensive evals may not even start by the 6 month mark, but I assumed that was just an unfortunate side effect of how the section was written, and the intention was that evals will start at the 6 month mark.
Once the next Anthropic, GDM, or OpenAI paper on SAEs comes out, I will evaluate my predictions in the same way as before.
Uhh… if we (GDM mech interp team) saw good results on any one of the eight things on your list, we’d probably write a paper just about that thing, rather than waiting to get even more results. And of course we might write an SAE paper that isn’t about downstream uses (e.g. I’m also keen on general scientific validation of SAEs), or a paper reporting negative results, or a paper demonstrating downstream use that isn’t one of your eight items, or a paper looking at downstream uses but not comparing against baselines. So just on this very basic outside view, I feel like the sum of your probabilities should be well under 100%, at least conditional on the next paper coming out of GDM. (I don’t feel like it would be that different if the next paper comes from OpenAI / Anthropic.)
The problem here is “next SAE paper to come out” is a really fragile resolution criterion that depends hugely on unimportant details like “what the team decided was a publishable unit of work”. I’d recommend you instead make time-based predictions (i.e. how likely are each of those to happen by some specific date).
This seems to presume that you can divide up research topics into “alignment” vs “control” but this seems wrong to me. E.g. my categorization would be something like:
Clearly alignment: debate theory, certain flavors of process supervision
Clearly control: removing affordances (e.g. “don’t connect the model to the Internet”)
Could be either one: interpretability, critique models (in control this is called “untrusted monitoring”), most conceptions of ELK, generating inputs on which models behave badly, anomaly detection, capability evaluations, faithful chain of thought, …
Redwood (I think Buck?) sometimes talks about how labs should have the A-team on control and the B-team on alignment, and I have the same complaint about that claim. It doesn’t make much sense for research, most of which helps with both. It does make sense as a distinction for “what plan will you implement in practice”—but labs have said very little publicly about that.
Other things that characterize work done under the name of “control” so far are (1) it tries to be very concrete about its threat models, to a greater degree than most other work in AI safety, and (2) it tries to do assurance, taking a very worst case approach. Maybe you’re saying that people should do those things more, but this seems way more contentious and I’d probably just straightforwardly disagree with the strength of your recommendation (though probably not its direction).
Nitpick: I would also quibble with your definitions; under your definitions, control seems like a subset of alignment (the one exception being if you notice the model is scheming and then simply stop using AI). I think you really have to define alignment as models reliably doing what you want independent of the surrounding context, or talk about “trying to do what you want” (which only makes sense when applied to models, so has similar upshots).
Tbc I like control and think more effort should be put into it; I just disagree with the strength of the recommendation here.
I think this is referring to ∇θL(xtrain)=0, which is certainly true for a perfectly optimized model (or even just settled gradient descent). Maybe that’s where the miscommunication is stemming from.
Ah, yup, that’s the issue, and I agree you’re correct that is the relevant thing here. I’ll edit the post to say I’m no longer sure about the claim. (I don’t have the time to understand how this lines up with the actual paper—I remember it being kind of sparse and not trivial to follow—perhaps you could look into it and leave a comment here.)
Mathematically, the Taylor expansion is:
L(xtrain, θ + Δθ) ≈ L(xtrain, θ) + ∇θL(xtrain, θ)·Δθ + O(‖Δθ‖²)
And then we have L(xtrain, θ) = 0 and also ∇θL(xtrain, θ) = 0, so L(xtrain, θ + Δθ) ≈ 0 for small Δθ. (This does assume a “sufficiently nice” loss function, that is satisfied by most loss functions used in practice.)
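Spelled out as a display (under the same reading, i.e. assuming both the training loss and its gradient are zero at θ):

```latex
% First-order Taylor expansion of the training loss around the trained parameters \theta
\begin{align}
L(x_{\mathrm{train}}, \theta + \Delta\theta)
  &= L(x_{\mathrm{train}}, \theta)
   + \nabla_\theta L(x_{\mathrm{train}}, \theta)^\top \Delta\theta
   + O\!\left(\lVert \Delta\theta \rVert^{2}\right) \\
  &= O\!\left(\lVert \Delta\theta \rVert^{2}\right)
   \quad \text{when } L(x_{\mathrm{train}}, \theta) = 0
   \text{ and } \nabla_\theta L(x_{\mathrm{train}}, \theta) = 0 .
\end{align}
% So the loss at nearby parameter settings is nonzero only at second order in the perturbation.
```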
I agree the second-order term is not zero. I also agree that if you take some point in between θ and θ + Δθ it can have non-zero loss, e.g. L(xtrain, θ + Δθ/2) need not be exactly zero. I’m not sure if either of these are what you’re trying to say, but in any case they aren’t relevant to the quoted sentence.
If you are claiming that L(xtrain, θ + Δθ) is not approximately zero for small Δθ, then I disagree and am unclear on how your arguments are supposed to establish that.
I broadly like the actual plan itself (obviously I would have some differences, but it is overall reasonably close to what I would imagine). However, it feels like there is an unwarranted amount of doom mentality here. To give one example:
Suppose that for the first AI that speeds up alignment research, you kept a paradigm with faithful and human-legible CoT, and you monitored the reasoning for bad reasoning / actions, but you didn’t do other kinds of control as a second line of defense. Taken literally, your words imply that this would very likely yield catastrophically bad results. I find it hard to see a consistent view that endorses this position, without also believing that your full plan would very likely yield catastrophically bad results.
(My view is that faithful + human-legible CoT along with monitoring for the first AI that speeds up alignment research, would very likely ensure that AI system isn’t successfully scheming, achieving the goal you set out. Whether there are later catastrophically bad results is still uncertain and depends on what happens afterwards.)
This is the clearest example, but I feel this way about a lot of the rhetoric in this post. E.g. I don’t think it’s crazy to imagine that without SL4 you still get good outcomes even if just by luck, and I don’t think a minimal stable solution involves most of the world’s compute going towards alignment research.
To be clear, it’s quite plausible that we want to do the actions you suggest, because even if they aren’t literally necessary, they can still reduce risk and that is valuable. I’m just objecting to the claim that if we didn’t have any one of them then we very likely get catastrophically bad results.