Re: the METR evaluations on o1.
We’ll be releasing more details of our o1 evaluations, in the same style as our blog posts for o1-preview and Claude 3.5 Sonnet (Old). This will include more detail on the general autonomy capability evaluations as well as the AI R&D results on RE-Bench.
Whereas the METR evaluation, presumably using final o1, was rather scary. [..] From the performance they got, I assume they were working with the full o1, but from the wording it is unclear that they got access to o1 pro?
Our evaluations were not on the released o1 (nor o1-pro); instead, we were provided with an earlier checkpoint of o1 (this is in the system card as well). You’re correct that we were working with a variant of o1 and not o1-mini, though.
If 70% of all observed failures are essentially spurious, then removing even some of those would be a big leap – and if you don’t even know how the tool-use formats work and that’s causing the failures, then that’s super easy to fix.
While I agree with the overall point (o1 is a very capable model whose capabilities are hard to upper bound), our criteria for “spurious” are rather broad and include many issues that we don’t expect to be super easy to fix with only small scaffolding changes. In experiments with previous models, I’d say 50% of the issues we classify as spurious are fixable with small amounts of effort.
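To make the “small scaffolding change” idea more concrete, here is a minimal sketch of the kind of fix in question, assuming a hypothetical harness that expects tool calls as bare JSON; the function names and format below are illustrative, not our actual setup. A model that emits a valid tool call but wraps it in extra prose fails under the strict parser, while a slightly more tolerant parser recovers it.

```python
import json
import re

# Hypothetical illustration (not METR's actual harness): a "spurious" failure
# where the model produces a correct tool call but wraps it in prose, and a
# small scaffolding change that recovers it.

def parse_tool_call_strict(message: str):
    """Strict harness: the reply must be a bare JSON object and nothing else."""
    try:
        call = json.loads(message)
    except json.JSONDecodeError:
        return None  # scored as a failed step, even though the intent was clear
    return call if isinstance(call, dict) and "tool" in call else None

def parse_tool_call_tolerant(message: str):
    """Small change: also accept a JSON object embedded in surrounding text."""
    for candidate in re.findall(r"\{.*\}", message, re.DOTALL):
        try:
            call = json.loads(candidate)
        except json.JSONDecodeError:
            continue
        if isinstance(call, dict) and "tool" in call:
            return call
    return None

reply = 'Sure, listing the files now:\n{"tool": "bash", "args": {"cmd": "ls"}}'
assert parse_tool_call_strict(reply) is None              # spurious failure
assert parse_tool_call_tolerant(reply)["tool"] == "bash"  # recovered by the fix
```

Fixes of this flavor are cheap, but plenty of the issues we label spurious are not this mechanical, which is why we don’t expect all of them to disappear with light scaffolding work.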
Which is all to say, this may look disappointing, but it is still a rather big jump after even minor tuning
Worth noting that this was similar to our experience with o1-preview, where we saw substantial improvements in agentic performance with only a few days of human effort.
I am worried that issues of this type will cause systematic underestimates of the agent capabilities of new models that are tested, potentially quite large underestimates.
Broadly agree with this point—while we haven’t seen groundbreaking advancements due to better scaffolding, there have been substantial improvements to o1-preview’s coding abilities post-release via agent scaffolds such as AIDE. I (personally) expect to see comparable increases for o1 and o1-pro in the coming months.
Otherwise, we could easily in the future release a model that is actually (without loss of generality) High in Cybersecurity or Model Autonomy, or much stronger at assisting with AI R&D, with only modest adjustments, without realizing that we are doing this. That could be a large or even fatal mistake, especially if circumstances would not allow the mistake to be taken back. We need to fix this.
[..]
This is a lower bound, not an upper bound. But what you need, when determining whether a model is safe, is an upper bound! So what do we do?
Part of the problem is the classic problem with model evaluations: elicitation efforts, by default, only ever provide existence proofs and rarely if ever provide completeness proofs. A prompt that causes the model to achieve a task provides strong evidence of model capability, but the space of reasonable prompts is far too vast to search exhaustively to truly demonstrate model incapability. Model incapability arguments generally rely on an implicit “we’ve tried as hard at elicitation as would be feasible post-deployment”, but this is almost certainly not going to be the case, given the scale of pre-deployment evaluations vs. post-deployment use cases.
The way you get a reasonable upper bound pre-deployment is by providing pre-deployment evaluators with some advantage over end-users, for example by using a model that’s not refusal-trained or by allowing for small amounts of fine-tuning. OpenAI did do this in their original preparedness team bio evals; specifically, they provided experts with non-refusal fine-tuned models. But it’s quite rare to see substantial advantages given to pre-deployment evaluators, for a variety of practical and economic reasons, and in-house usage likely predates pre-deployment capability/safety evaluations anyway.