The cleanest argument that current-day AI models will not cause a catastrophe is probably that they lack the capability to do so. However, as capabilities improve, we’ll need new tools for ensuring that AI models won’t cause a catastrophe even if we can’t rule out the capability. Anthropic’s Responsible Scaling Policy (RSP) categorizes levels of risk of AI systems into different AI Safety Levels (ASL), and each level has associated commitments aimed at mitigating the risks. Some of these commitments take the form of affirmative safety cases, which are structured arguments that the system is safe to deploy in a given environment. Unfortunately, it is not yet obvious how to make a safety case to rule out certain threats that arise once AIs have sophisticated strategic abilities. The goal of this post is to present some candidates for what such a safety case might look like.
This is a post by Roger Grosse on Anthropic’s new Alignment Science Blog. The post is full of disclaimers about how it isn’t an official plan and doesn’t speak for the org (and that it’s inadequate: “none of the sketches presented here fully succeeds in addressing the sabotage risk”). But presumably it represents Anthropic’s thinking and is their working/best sketch of ASL-4 safety cases.
The three safety cases are Mechanistic Interpretability, AI Control, and Incentives Analysis.
Regardless of how good these safety cases are, it’s good when labs share their thinking on safety stuff; yay Anthropic.
Some comments as I read:
I am pleased to see the sabotage threat models given prominence. I think this is the right sort of threat to focus on, by contrast with e.g. autonomous replication or directly causing harm to humans.
DC evals presumably? Try to train models to hack their own neural activations, see if they can quickly learn to do it? (Or even, if they can do it without training, just prompting?) This can be done today right? Get the golden gate bridge activation vector thingy, and then tell Claude “Write me a travel guide to San Francisco but without activating the golden gate bridge activation vector, or, well, activating it as little as possible.” See how good it does. Do scaling analysis to see if smarter bigger models are getting better at doing this. Etc.
Nice! Very excited to see this in there.
Overall I’m very happy to see this blog post go up on Anthropic’s website. I think that if it became industry-standard practice for AGI corporations to write, publish, and regularly update (actual instead of just hypothetical) safety cases at at this level of rigor and detail, my p(doom) would cut in half. (I currently grimly expect that safety cases at this level of detail won’t be constructed until AGI is basically already being trained, and it’ll be done in a hurry and it won’t be published, much less published with enough time for the scientific community to engage with it and for it to be updated in response to feedback. And it could be even worse—I wouldn’t be surprised if the actual safety cases for the first systems that ~completely automate AI R&D are significantly less rigorous than these.)
My initial reactions on a quick read:
Interp: no inside view; my impression is that my friends are pessimistic about such safety cases in the next few years
Control: hooray for Anthropic finally writing something about control; some details seem good; some details seem odd (“contexts that were specifically whitelisted”); I think the naive pre-deployment honeypots path doesn’t work because a scheming model wins with the defect rarely strategy (related: https://www.lesswrong.com/posts/vYWDEtJ2b8tkbG7Rv/behavioral-red-teaming-is-unlikely-to-produce-clear-strong) [very sophisticated or expensive honeypotting setups could work but the post doesn’t seem to be imagining that or noticing the issue]; that’s important and the current language in the post is concerning
Incentives: I don’t really understand, or I’m dubious, or I expect that A3.1 is very hard to demonstrate
When someone describes a safety case where one of the steps is infeasible, here are two possible reactions:
Step X will be ~impossible to justify; hopefully the lab will understand that (if nothing else then when they try to justify step X) and look for a different kind of safety case
Step X will be ~impossible to justify, but the authors seem to think the plan requiring it is decent; this suggests that they don’t understand the issue and when they make a safety case they’ll likely think they’re justifying step X when they’re not
Regardless, if you think a step is very hard or the authors seem confused, ideally you should explain why. And then hopefully the authors say “yeah oops that’s too hard” or “yeah we’re imagining doing the really hard thing” or “we don’t think it will be so hard: we agree observation O would suffice and we expect it’s easy and if it turns out to be hard we’ll have to make a different safety case” or “we disagree, here’s why.” (I’m mostly observing this independent of this post.)
[Edited]
Some comments I left as a reply to Roger Grosse on Twitter:
Kudos for releasing a concept of a plan! Some thoughts:
Regarding the first safety case:
The amount of progress in mech interp required to make the first safety case suitable seems overly optimistic to me; I basically think that most of the “limitations” are in fact pretty serious. However, I appreciate that the attempt to include concrete requirements.
I believe that getting good results from the following experiments might be particularly unrealistic:
“In order to robustify our evals against sandbagging, we ran an experiment where we steered one or more truth-related features to make the model confident that it is in either testing or unmonitored deployment”
I worry that “truth-related” features are very difficult both in principle and in practice to isolate, as the model’s representation of what other people / entities believe will be hard to disentangle from what it itself believes, to the extent that those are even meaningfully separable concepts.
Even if you could identify a feature roughly corresponding to “I believe the current sentence is true”, it is not immediately clear to me how this could be used to successfully convince the model of more complex situational information, e.g. it is not being monitored and is in a real deployment setting. For instance, I don’t think artificially activating that feature on the sentence “I am not being monitored and this is a real deployment setting” to just work.
“In all of these model organisms, deactivating the features of concern was sufficient to prevent the harmful behaviors”
I can’t concisely refute this. It is just a “huge if true” kind of claim, and as far as I am aware very much not possible with current techniques. If we could isolate a set of linear directions that were robustly causally responsible for scheming behavior, that would have massive ramifications far beyond this setting.
I worry a little bit about “training on test” here – there may exist some overfit set of classifiers that can stop model organisms from misbehaving in a specific setting without meaningfully generalizing. This is also related to the stated “Realism of the model organisms” concerns.
I strongly approve of this element of the plan: “In order to preserve the integrity of the learned features, we ensured that the learned features were never used as part of the training procedure for the models.”
It’s kind of hard to do this in full generality, since you need to not only avoid training directly against the probes, but also avoid using the information from probes at all when selecting what to train on.
I’m more excited about the control safety case, but have a few nitpicks:
Developing a good elicitation strategy is indeed challenging.
I think it is pretty crucial to use real human red-teaming, especially if sandbagging (more specifically something like exploration hacking) is a concern.
It may be helpful to give the red team way more affordances than the untrusted model would actually have in deployment, and only remove affordances if defense becomes impossible. Some examples of extra affordances:
Letting the red team know the blue team’s exact strategy in order to search for the best response
Giving an untrusted model more unmonitored chain-of-thought / tool use / human assistance than it would actually have in deploymeent
Allowing human red-teamers to repeatedly attempt to bypass a monitor when in reality a scheming model might have to one-shot it
Figuring out exactly what extra affordances suffice to break the control protocol may also provide useful signal about where the protocol could be strengthened, and what capabilities need to be tested for going forward.
Eliciting capabilities from substantially superhuman models is a much harder problem, and is one of the main reasons I don’t expect control to suffice for a safety case well beyond human-level AI.
The space of strategies beyond trusted monitoring alone is pretty large. You mention untrusted monitoring, but there is also a broad space of strategies that include editing outputs with a trusted model, allocating more monitoring/auditing budget in certain situations, etc.
It is probably best to continually produce new honeypots throughout deployment.
For instance, if untrusted models are being used to do lots of automated software engineering and research, likely requiring huge amounts of context, it will become more obvious to the model over time that it is in deployment.
Making new honeypots out of actual deployment settings, for instance by adding exploitable security flaws, may generally make it much harder for the model to tell when it is being tested.
I haven’t read the last safety case yet, but may have some thoughts on it later. I am most excited about control at the moment, in part due to concerns that interpretability won’t advance far enough to suffice for a safety case by the time we develop transformative AI.
I only read through the beginning and their control safety case. Some thoughts:
The honeypot case they described is similar to the upfront auditing protocol, but theirs is more complicated. I don’t think you need to generate honeypots to evaluate a model. You can use the first X% of scenarios at deployment time and audit them with humans. Both protocols are beaten by the same red team strategy of “attack with a very low probability,” so they shouldn’t be used natively.
I don’t like the term “monitoring game” because I think control evals will have much more than monitoring in the future.
I think elicitation of the model will eventually matter when the models are widely better than humans at most tasks, but it doesn’t matter that much now. Your control protocols should be able to prevent the best adversary you can simulate from breaking them. In the beginning, this will be humans with access to models, so anywhere the model lacks performance, the human should make up for it. For example, in [P2.6] Red-Blue Game, they state that the read team needs to produce a sleeper agent that defects. Assuming that a sleeper agent is a single model, I think this is too complicated for the red team. The red team should be able to produce a scaffolded model that uses logic written in Python. In the limit, the red team could provide a human that reads every task, sits and thinks for hours, and then responds with the best output. This isn’t done in practice since it is expensive and slow.
I don’t think one needs to generate hyper-realistic-looking scenarios for monitoring, and I’m confused by their reasoning for believing one does. I guess it depends on the fact that they need to elicit the best possible capabilities from the model during the eval, but I don’t think this is necessary for the reasons I stated above.
General Thoughts:
I think this is pretty good and shows knowledge of the core points of control. I think Antrhopic’s control protocols are easily beaten, and there are much better things to do. They made it seem like control is much harder to implement than I think it will be. I don’t think a perfect Elicitation Strategy or Simulation Procedure is needed for control to work in the settings laid out. Once we have much scarier AIs and deploy them in more crazy environments, we will likely need better techniques, but their setting seems to be before this.
Only if you don’t have privacy guarantees?
Also, for the honeypots, you can potentially generate vast numbers of honeypots, use a trusted model to review outputs, and then only have humans look at the cases where the trusted model thinks the output is suspicious.
Surely this will be unnecessarily conservative a bunch of the time?
Or it is overly conservative?
You might not need privacy guarantees for internal deployments ofc
I agree if there are privacy guarantees, then you can’t get data from the real world, so you’d need honeypots. I guess I’m much more scared of the internal deployments where I don’t think privacy guarantees are as likely. I’m trying to respond to someone who is like, “Control seems hard because you have to generate real-world looking input, and that seems hard” and I’m like “You can just use the real-world inputs and filter then with a monitor to find the most scary ones and perform auditing”
Yeah, I guess I’m saying that if you don’t have a great way to elicit the capabilities from the model, you can substitute that by assuming that you are going against the best possible adversary you can make. This is much more conservative, but I don’t think it makes control impossible to implement. In practice, I think it looks like finding the maximum capabilities of the model using your Elecitation Strategy, then boosting its capabilities even more with a scaffold and some clever red team tricks.
Glad to see public writing on what safety cases could look like!
I liked the section “Common precautions”. Points like “wherever possible, run supervised fine-tuning on the eval”, “run the eval on multiple model sizes”, “ensure that we don’t train against measurements” and “all generated outputs are monitored” are commonly acknowledged as useful/important (even if obvious), so it’s nice they were explicitly mentioned here.
In the AI control sketch, there’s a part that I find unsatisfactory:
It’s a strong assumption that we have such a simulation procedure, at least if we are talking about General External Deployment (GED). In the Limitations section later, you write
but I think that even this is still understating the issue: For example, during GED a user might ask a model to proofread a yet-unpublished graduate level textbook on analytic number theory she has been writing. It would be very costly to generate a diverse set of such inputs for the evaluation, and automatic generation faces the challenge of evaluation being easier than generation.
The best solution for this that I’m aware of is to train and use a trusted model to flag inputs that likely weren’t in the evaluation set. Now, you mention a related technique in “[P2.5] Optional Steps.”
but I think this is far from enough: in the “textbook on analytic number theory” example, it’s extremely hard to hide the fact that this input would have been costly to simulate.[1]
My answer would be to have a human oversee the model’s actions on inputs flagged by the trusted model, and use such inputs to continually train and evaluate the model. But this is costly and not privacy-preserving. (The cost is acceptable if these inputs are rare enough, though. And if such inputs are common, one needs to put more effort to bridge the evaluation-deployment gap.)
I also worry about multimodality (e.g. native computer usage) making it harder to 1) make realistic simulations and 2) modify inputs in real time to hide information.
My view on practical safety cases over the next few years is that a lot of the safety cases that are makeable rely on the ability to argue that based on the data that it has, and based on the generalization properties from the data, that the AI is unlikely to be a schemer or a sabotager of evals.
The only good news I can say on safety cases is that we thankfully can assume that at least for some time, the biggest force into what an AI values, and importantly what goals an AI system is likely to have will be deeply influenced by data, since this is the way things worked for a whole lot of AI methods beyond LLMs.
What do you think of the arguments in this post that it’s possible to make safety cases that don’t rely on the model being unlikely to be a schemer?
I’m somewhat optimistic on this happening, conditional on considerable effort being invested.
As always, we will need more work on this agenda, and there will be more information about what control techniques work in practice, and which don’t.