We’ve deliberately set conservative thresholds, such that I don’t expect the first models that pass the ASL-3 evals to pose serious risks without improved fine-tuning or agent scaffolding, and we’ve committed to re-evaluate every three months to check that this remains true. From the policy:
Ensuring that we never train a model that passes an ASL evaluation threshold is a difficult task. Models are trained in discrete sizes, they require effort to evaluate mid-training, and serious, meaningful evaluations may be very time consuming, since they will likely require fine-tuning.

This means there is a risk of overshooting an ASL threshold when we intended to stop short of it. We mitigate this risk by creating a buffer: we have intentionally designed our ASL evaluations to trigger at slightly lower capability levels than those we are concerned about, while ensuring we evaluate at defined, regular intervals (specifically every 4x increase in effective compute, as defined below) in order to limit the amount of overshoot that is possible. We have aimed to set the size of our safety buffer to 6x (larger than our 4x evaluation interval) so model training can continue safely while evaluations take place. Correct execution of this scheme will result in us training models that just barely pass the test for ASL-N, are still slightly below our actual threshold of concern (due to our buffer), and then pausing training and deployment of that model unless the corresponding safety measures are ready.
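To make the buffer arithmetic concrete, here’s a minimal sketch of how the 4x evaluation interval and 6x safety buffer interact. This is my own illustration rather than anything from the policy, and the `worst_case_overshoot` helper and its framing are assumptions for exposition only.

```python
# Minimal sketch (illustration only, not from the policy) of how the 4x
# evaluation interval and 6x safety buffer interact. Effective compute is
# treated as a single abstract multiplier.

EVAL_INTERVAL = 4.0   # run ASL evals after every 4x increase in effective compute
SAFETY_BUFFER = 6.0   # eval-trigger level sits 6x (in effective compute) below
                      # the capability level of actual concern

def worst_case_overshoot(eval_interval: float, safety_buffer: float) -> float:
    """Return the remaining margin below the level of concern in the worst case.

    Worst case: a model crosses the eval-trigger level just after an evaluation,
    so training continues for up to one full interval before the next evaluation
    catches it.
    """
    # Effective compute relative to the trigger level when the overshoot is caught:
    caught_at = eval_interval
    # The level of actual concern sits at `safety_buffer` times the trigger level,
    # so the remaining margin is:
    return safety_buffer / caught_at

if __name__ == "__main__":
    margin = worst_case_overshoot(EVAL_INTERVAL, SAFETY_BUFFER)
    print(f"Worst-case remaining margin: {margin:.2f}x")  # -> 1.50x
```

In other words, even if a model crosses the eval-trigger level immediately after an evaluation and trains for a full 4x interval before the next one, it should still sit roughly 1.5x of effective compute below the actual threshold of concern, which is the slack the policy relies on while evaluations are running.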
I also think that many risks which could emerge in apparently-ASL-2 models will be reasonably mitigable by some mixture of re-finetuning, classifiers to reject harmful requests and/or responses, and other techniques. I’ve personally spent more time thinking about the autonomous replication evals than the biorisk evals, though, and this might vary by domain.
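For concreteness, here’s a minimal sketch of the request/response gating pattern I have in mind. The `toy_harm_score` keyword scorer and the threshold are placeholders of my own invention, not any deployed classifier.

```python
# Sketch of classifier-based gating on both the request and the response.
# The "classifier" here is a toy keyword scorer purely for illustration; a real
# deployment would use trained harm classifiers on both sides of the model call.

from typing import Callable

REFUSAL = "I can't help with that."

def toy_harm_score(text: str) -> float:
    """Toy stand-in for a trained harm classifier: fraction of flagged keywords."""
    flagged = {"synthesize", "pathogen", "exploit"}
    words = text.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

def gated_generate(prompt: str,
                   model: Callable[[str], str],
                   threshold: float = 0.2) -> str:
    """Reject harmful requests up front, then filter harmful responses after."""
    if toy_harm_score(prompt) > threshold:
        return REFUSAL
    response = model(prompt)
    if toy_harm_score(response) > threshold:
        return REFUSAL
    return response
```

A real system would look nothing like this keyword filter, but the control flow is the relevant point: mitigations can sit around the model (rejecting requests and filtering completions) rather than only inside its weights.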