I agree that these results were somewhat expected; however, I believe that the outcomes of these evaluations were uncertain:
1. It was unclear whether generalization from WMDP-like questions to the real WMDP benchmark would work so well. OOD generalization for targeted (/strategic) sandbagging using synthetic data is a clear threat that a lot of people making DC evals probably hadn’t thought about.
2. The calibration results were also uncertain; it seemed like a more challenging task to me (also see our previous post on this).
3. Lastly, it didn’t seem obvious that even current open-source models could be trained to emulate weaker models this well.
This work by Olli (EDIT: previously I wrote “Ollie”) Jarviniemi and Evan Hubinger might be interesting: they find that Claude 3 Opus (arguably) sandbagged with negligible pressure.
One clarification as well: we do not focus purely on AI system sandbagging, but also on developer sandbagging. That is, whether developers can make their models underperform.
Thanks for the additional context; that seems reasonable.
This is an amusing typo; to clear up any potential confusion, I am a distinct person from “Ollie J”, who is an author of the current article.
(I don’t have much to say on the article, but it looks good! And thanks for the clarifications in the parent comment, I agree that your points 1 to 3 are not obvious, and like that you have gathered information about them.)
Oh, I am sorry, I should have double-checked your name. My bad!
Quite a coincidence indeed that your last name also starts with the same letter.