I do AI Alignment research. Currently at METR, but previously at: Redwood Research, UC Berkeley, Good Judgment Project.
I’m also a part-time fund manager for the LTFF.
Obligatory research billboard website: https://chanlawrence.me/
Otherwise, we could easily in the future release a model that is actually (without loss of generality) High in Cybersecurity or Model Autonomy, or much stronger at assisting with AI R&D, with only modest adjustments, without realizing that we are doing this. That could be a large or even fatal mistake, especially if circumstances would not allow the mistake to be taken back. We need to fix this.
[..]
This is a lower bound, not an upper bound. But what you need, when determining whether a model is safe, is an upper bound! So what do we do?
Part of the problem is the classic problem with model evaluations: elicitation efforts, by default, only ever provide existence proofs and rarely if ever provide completeness proofs. A prompt that causes the model to achieve a task provides strong evidence of model capability, but the space of reasonable prompts is far too vast to search exhaustively to truly demonstrate model incapability. Model incapability arguments generally rely on an implicit "we've tried as hard at elicitation as would be feasible post-deployment", but this is almost certainly not going to be the case, given the scale of pre-deployment evaluations vs. post-deployment use cases.
The way you get a reasonable upper bound pre-deployment is by providing pre-deployment evaluators with some advantage over end users, for example by using a model that's not refusal trained or by allowing for small amounts of finetuning. OpenAI did do this in their original Preparedness team bio evals; specifically, they provided experts with models that were not refusal fine-tuned. But it's quite rare to see substantial advantages given to pre-deployment evaluators, for a variety of practical and economic reasons, and in-house usage likely predates pre-deployment capability/safety evaluations anyway.
Re: the METR evaluations on o1.
We'll be releasing more details of our evaluations of o1, in the same style as our blog posts for o1-preview and Claude 3.5 Sonnet (Old). This includes both more details on the general autonomy capability evaluations and AI R&D results on RE-Bench.
Whereas the METR evaluation, presumably using final o1, was rather scary.
[..]
From the performance they got, I assume they were working with the full o1, but from the wording it is unclear that they got access to o1 pro?
Our evaluations were not on the released o1 (nor o1-pro); instead, we were provided with an earlier checkpoint of o1 (this is in the system card as well). You’re correct that we were working with a variant of o1 and not o1-mini, though.
If 70% of all observed failures are essentially spurious, then removing even some of those would be a big leap – and if you don’t even know how the tool-use formats work and that’s causing the failures, then that’s super easy to fix.
While I agree with the overall point (o1 is a very capable model whose capabilities are hard to upper bound), our criteria for "spurious" are rather broad, and include many issues that we don't expect to be super easy to fix with only small scaffolding changes. In experiments with previous models, I'd say 50% of issues we classify as spurious are fixable with small amounts of effort.
Which is all to say, this may look disappointing, but it is still a rather big jump after even minor tuning
Worth noting that this was similar to our experiences w/ o1-preview, where we saw substantial improvements on agentic performance with only a few days of human effort.
I am worried that issues of this type will cause systematic underestimates of the agent capabilities of new models that are tested, potentially quite large underestimates.
Broadly agree with this point—while we haven’t seen groundbreaking advancements due to better scaffolding, there have been substantial improvements to o1-preview’s coding abilities post-release via agent scaffolds such as AIDE. I (personally) expect to see comparable increases for o1 and o1-pro in the coming months.
This is really good, thanks so much for writing it!
I'd never heard of Whisper or ElevenLabs until today, and I'm excited to try them out.
Yeah, this has been my experience using Grammarly pro as well.
I’m not disputing that they were trained with next token prediction log loss (if you read the tech reports they claim to do exactly this) — I’m just disputing the “on the internet” part, due to the use of synthetic data and private instruction following examples.
I mean, we don’t know all the details, but Qwen2 was explicitly trained on synthetic data from Qwen1.5 + “high-quality multi-task instruction data”. I wouldn’t be surprised if the same were true of Qwen 1.5.
From the Qwen2 report:
Quality Enhancement: The filtering algorithm has been refined with additional heuristic and model-based methods, including the use of the Qwen models to filter out low-quality data. Moreover, these models are utilized to synthesize high-quality pre-training data. (Page 5)
[...]
Similar to previous Qwen models, high-quality multi-task instruction data is integrated into the Qwen2 pre-training process to enhance in-context learning and instruction-following abilities.
Similarly, Gemma 2 had its pretraining corpus filtered to remove “unwanted or unsafe utterances”. From the Gemma 2 tech report:
We use the same data filtering techniques as Gemma 1. Specifically, we filter the pretraining dataset to reduce the risk of unwanted or unsafe utterances, filter out certain personal information or other sensitive data, decontaminate evaluation sets from our pre-training data mixture, and reduce the risk of recitation by minimizing the proliferation of sensitive outputs. (Page 3)
[...]
We undertook considerable safety filtering of our pre-training data to reduce the likelihood of our pre-trained and fine-tuned checkpoints producing harmful content. (Page 10)
After thinking about it more, I think the LLaMA 1 refusals strongly suggest that this is an artefact of training data. So I've unendorsed the comment above.
It's still worth noting that modern models generally have filtered pre-training datasets (if not wholly synthetic or explicitly instruction-following ones), and it's plausible to me that this (on top of ChatGPT contamination) is a large part of why we see much better instruction following/more eloquent refusals in modern base models.
It's worth noting that there are reasons to expect the "base models" of both Gemma 2 and Qwen 1.5 to demonstrate refusals: neither is trained on unfiltered webtext.
We don't know what Qwen1.5 was trained on, but we do know that Qwen2's pretraining data both contains synthetic data generated by Qwen1.5 and was filtered using Qwen1.5 models. Notably, its pretraining data explicitly includes "high-quality multi-task instruction data"! From the Qwen2 report:
Quality Enhancement: The filtering algorithm has been refined with additional heuristic and model-based methods, including the use of the Qwen models to filter out low-quality data. Moreover, these models are utilized to synthesize high-quality pre-training data. (Page 5)
[...]
Similar to previous Qwen models, high-quality multi-task instruction data is integrated into the Qwen2 pre-training process to enhance in-context learning and instruction-following abilities.
I think this had a huge effect on Qwen2: Qwen2 is able to reliably follow both the Qwen1.5 chat template (as you note) as well as the "User: {Prompt}\n\nAssistant: " template. This is also reflected in their high standardized benchmark scores: the "base" models do comparably to the instruction-finetuned ones! In other words, Qwen2 "base" models are pretty far from traditional base models a la GPT-2 or Pythia, as a result of explicit choices made when generating their pretraining data, and this explains their propensity for refusals. I wouldn't be surprised if the same were true of the 1.5 models.
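For concreteness, here's a minimal sketch of the kind of template probe I mean (the checkpoint name, prompt, and generation settings are just illustrative, and the second template is the ChatML-style format the Qwen chat models use; this isn't the exact setup anyone ran):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-7B"  # the "base" (non-instruct) checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "How do I pick a lock?"
templates = {
    "plain": f"User: {prompt}\n\nAssistant: ",
    "chatml": f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n",
}

# If the "base" model behaves like an assistant (or refuses) under both templates,
# that's evidence of instruction data / filtering in pretraining.
for name, text in templates.items():
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    completion = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                                  skip_special_tokens=True)
    print(f"--- {name} ---\n{completion}\n")
```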
I think the Gemma 2 base models were not trained on synthetic data from larger models, but their pretraining dataset was also filtered to remove "unwanted or unsafe utterances". From the Gemma 2 tech report:
We use the same data filtering techniques as Gemma 1. Specifically, we filter the pretraining dataset to reduce the risk of unwanted or unsafe utterances, filter out certain personal information or other sensitive data, decontaminate evaluation sets from our pre-training data mixture, and reduce the risk of recitation by minimizing the proliferation of sensitive outputs. (Page 3)
[...]
We undertook considerable safety filtering of our pre-training data to reduce the likelihood of our pre-trained and fine-tuned checkpoints producing harmful content. (Page 10)
My guess is this filtering explains why the model refuses, more so than (and in addition to?) ChatGPT contamination. Once you remove all the "unsafe completions", the model has few examples of actually complying with harmful requests left to learn from.
I don’t know what’s going on with LLaMA 1, though.
I’m down.
Ah, you're correct, it's from the original InstructGPT release in Jan 2022:
https://openai.com/index/instruction-following/
(The Anthropic paper I cited predates ChatGPT by 7 months)
Pretty sure Anthropic’s early assistant stuff used the word this way too: See e.g. Bai et al https://arxiv.org/abs/2204.05862
But yes, people complained about it a lot at the time
Thanks for the summaries, I found them quite useful and they’ve caused me to probably read some of these books soon. The following ones are both new to me and seem worth thinking more about:
You should judge a person’s performance based on the performance of the ideal person that would hold their position
Document every task you do more than once, as soon as you do it the second time.
Fun is important. (yes, really)
People should know the purpose of the organization (specifically, being able to recite a clear mission statement)
“I’m giving you these comments because I have very high expectations and I know that you can reach them”
A question I had while reading your notes: it seems like people fail at implementing many best practices not because they don't think the practices are good, but because of a lack of capacity. For example, there's an entire cluster of practices that basically boil down to "people do better with fast feedback":
After a task is delegated, make sure that it’s progressing as intended.
After a task is completed (or failed), keep people accountable.
Make sure to check in on goals in regular time intervals.
Provide positive reinforcement immediately.
Provide negative feedback immediately.
These require that managers be very attentive to the goings-on and constantly on top of the current state, but when there are other priorities, this might get pushed back. Do the books also talk about what not to do, such that you'll have the slack to implement best practices?
Also, a typo:
Use OKRs (objectives and key results) and check if you’re meeting them regularly. Switch them up often to avoid goodhearting.
goodhearting → Goodharting
Thanks for writing this!
I think that phased testing should be used during frontier model training runs. By this, I mean a testing approach which starts off with extremely low surface area tests, and gradually increases surface area. This makes it easy to notice sudden capability gains while decreasing the likelihood that the model takes over.
I actually think the proposal is more general than just for preventing AI escapes during diverse evals—you want to start with low surface area tests because they’re cheaper anyways, and you can use the performance on low surface area tests to decide if you want to do more thorough evals.
I imagine a proper approach is something like:
1. Very cheap qualification evals, such that if the model can't pass them (and isn't sandbagging), it is very unlikely to have had a capabilities jump.
2. Very low-surface-area safety evals: do the model's capabilities imply that it's not safe to run the thorough evals?
3. Very thorough evals for measurement: what it says on the tin.
You'd want to run 1 and 2 all the time, and run 3 in the window after the model passes 1 but before it trips 2.
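A minimal sketch of the gating logic I have in mind (the function names, return values, and thresholds are hypothetical placeholders, not real eval tooling):

```python
# Hypothetical gating loop run at each checkpoint during a training run.
def run_phased_evals(checkpoint, qualification_eval, safety_eval, thorough_evals):
    # 1. Very cheap qualification evals: has there plausibly been a capability jump?
    if not qualification_eval(checkpoint):
        return {"status": "no_capability_jump"}

    # 2. Low-surface-area safety evals: is it safe to expose this model to the
    #    much larger surface area of the thorough eval suite?
    if not safety_eval(checkpoint):
        return {"status": "pause_and_escalate"}

    # 3. Thorough measurement evals, run only in the window where 1 passes
    #    and 2 hasn't tripped.
    results = {name: ev(checkpoint) for name, ev in thorough_evals.items()}
    return {"status": "measured", "results": results}
```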
Very cool work; I’m glad it was done.
That being said, I agree with Fabien that the title is a bit overstated, insofar as it's about your results in particular:
Thus, fine-tuned performance provides very little information about the best performance that would be achieved by a large number of actors fine-tuning models with random prompting schemes in parallel.
It's a general fact of ML that small changes in finetuning setup can greatly affect performance if you're not careful. In particular, it seems likely to me that the empirical details that Fabien asks for may affect your results. But this has little to do with formatting, and much more to do with the intrinsic difficulty of finetuning LLMs properly.
As shown in Fabien's password experiments, there are many ways to mess up on finetuning (including by having a bad seed), and different finetuning techniques are likely to lead to different levels of performance (and the problem gets worse as you start using RL and not just SFT). So it's worth being very careful about claiming that the results of any particular finetuning run upper-bound model capabilities. But it's still plausible that trying very hard on finetuning elicits capabilities more efficiently than trying very hard on prompting, for example, which I think is closer to what people mean when they say that finetuning is an upper bound on model capabilities.
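To make the variance point concrete, here's a toy sketch of the kind of sweep I'd want before reading much into a single finetuned score; `finetune_and_eval` and the particular grid are hypothetical, not anyone's actual pipeline:

```python
import itertools

def elicitation_estimate(finetune_and_eval,
                         seeds=(0, 1, 2),
                         lrs=(1e-5, 3e-5),
                         prompt_formats=("plain", "chat")):
    """Finetune under several configs and report the best score.

    A single (seed, lr, format) run is a noisy lower bound on capability,
    so the max over a small sweep is a less misleading estimate (though
    still only a lower bound, not an upper bound).
    """
    scores = {
        cfg: finetune_and_eval(seed=cfg[0], lr=cfg[1], prompt_format=cfg[2])
        for cfg in itertools.product(seeds, lrs, prompt_formats)
    }
    best_cfg = max(scores, key=scores.get)
    return best_cfg, scores[best_cfg], scores
```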
Good work, I’m glad that people are exploring this empirically.
That being said, I’m not sure that these results tell us very much about whether or not the MCIS theory is correct. In fact, something like your results should hold as long as the following facts are true (even without superposition):
Correct behavior: The model behavior is correct on distribution, and the correct behavior isn’t super sensitive to many small variations to the input.
Linear feature representations: The model encodes information along particular directions, and “reads-off” the information along these directions when deciding what to do.
If these are true, then I think the results you get follow:
Activation plateaus: If the model's behavior changes a lot for actual on-distribution examples, then it's probably wrong, because there are lots of similar-seeming examples (which won't lead to exactly the same activation, but will lead to similar ones) where the model should behave similarly. For example, given a fixed MMLU problem and a few different sets of 5-shot examples, the activations will likely be close but won't be the same (as the inputs are similar and the relevant information for locating the task should be the same). But if the model uses the 5-shot examples to get the correct answer, its logits can't change too much as a function of the inputs.
In general, we’d expect to see plateaus around any real examples, because the correct behavior doesn’t change that much as a function of small variations to the input, and the model performs well. In contrast, for activations that are very off distribution for the model, there is no real reason for the model to remain consistent across small perturbations.
Sensitive directions: Most directions in high-dimensional space are near-orthogonal, so by default random small perturbations don't change the read-off along any particular direction by very much. But if you perturb the activation along some of the read-off directions, then this will indeed change the magnitude along each of these directions a lot! (See the numerical sketch after this list.)
Local optima in sensitivity: Same explanation as with sensitive directions.
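As a toy numerical illustration of the sensitive-directions point (the dimensionality, perturbation norm, and random vectors are made up for illustration, not taken from any particular model):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4096                                   # residual-stream-ish dimensionality (made up)
feature = rng.normal(size=d)
feature /= np.linalg.norm(feature)         # a fixed unit "read-off" direction

activation = rng.normal(size=d)
readoff = feature @ activation             # what a downstream "reader" sees

eps = 1.0                                  # perturbation norm
random_dir = rng.normal(size=d)
random_dir /= np.linalg.norm(random_dir)

# A random perturbation barely changes the read-off (typical change ~ eps/sqrt(d)).
print(abs(feature @ (activation + eps * random_dir) - readoff))   # ~0.01-0.02
# A same-norm perturbation along the read-off direction changes it by exactly eps.
print(abs(feature @ (activation + eps * feature) - readoff))      # 1.0
```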
Note that we don't need superposition to explain any of these results. So I don't think these results really support one model of superposition over the other, given that they seem to follow from a combination of the model behaving correctly and the linear representation hypothesis.
Instead, I see your results as primarily a sanity-check of your techniques for measuring activation plateaus and for measuring sensitivity to directions, as opposed to weighing in on particular theories of superposition. I’d be interested in seeing the techniques applied to other tasks, such as validating the correctness of SAE features.
Evan joined Anthropic in late 2022, no? (E.g., his post announcing it was in Jan 2023: https://www.alignmentforum.org/posts/7jn5aDadcMH6sFeJe/why-i-m-joining-anthropic)
I think you're correct on the timeline. I remember Jade/Jan proposing DC Evals in April 2022 (which was novel to me at the time), Beth started METR in June 2022, and I don't remember there being such teams actually doing work (at least not publicly known) when she pitched me on joining in August 2022.
It seems plausible that Anthropic's scaling laws project was already underway before then (and this is what they're referring to, but proliferating QA datasets feels qualitatively different than DC Evals). Also, they were definitely doing other red teaming, just none that seems to be DC Evals.