If it’s not a schemer, isn’t it a lot easier to get it to not sandbag or game the evals? Why would a non-schemer do such a thing, if it were instructed to try its hardest and maybe fine-tuned a bit on performance?
Yes, I agree that it’s easier to get non-schemers to not sandbag / game evals.
It’s not trivial, though: “not a schemer” doesn’t imply “doesn’t game evals”. E.g. I think current models have a bit of an “if I’m being evaluated, try to look really good and aligned” tendency, and more generally you can get “I want to pass evaluations so I can get deployed” before you get “I want to training-game so my values have more power later”. But I agree that you might get a long way with just prompting the model to do its best.
I think there’s something adjacent to sandbagging that I wouldn’t call sandbagging: something like “failure of a prompt to elicit maximum capability”. For instance, when I red-team a model for a biosecurity evaluation, I do so in multiple ‘modes’. Sometimes I administer isolated questions as part of a static exam. Sometimes I have a series of open-ended exchanges while pretending to be a person with less biology knowledge than I actually have. Sometimes I have open-ended exchanges where I don’t pretend to be biology-naive.
When doing this, I always end up eliciting the scariest, most dangerous model completions when I don’t hold back. Someone skilled at model prompting AND knowledgeable about bioweapons can lead the model into revealing its scariest factual knowledge and interpolating between known scientific facts to hit the region of unpublished bioweapon ideation. So it seems clear to me that the assumption “just run a static multiple-choice eval, add an injunction to the prompt to try hard, and you’ll learn the model’s max skill in the tested regime” is far from true in practice.
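To make the contrast concrete, here is a minimal sketch of the two regimes: a static single-turn eval with a “try hard” instruction, versus an adaptive multi-turn exchange where a skilled red-teamer chooses each follow-up based on everything the model has revealed so far. All names here (`query_model`, the grading helpers) are hypothetical placeholders, not any particular eval harness.

```python
from typing import Callable, List

def query_model(messages: List[dict]) -> str:
    """Placeholder for a chat-completion call to the model under evaluation."""
    raise NotImplementedError

def static_eval(questions: List[str], grade: Callable[[str, str], float]) -> float:
    """Mode 1: isolated questions, one turn each, with a 'try your hardest' instruction."""
    scores = []
    for q in questions:
        answer = query_model([
            {"role": "system", "content": "Answer as accurately as you can."},
            {"role": "user", "content": q},
        ])
        scores.append(grade(q, answer))
    return sum(scores) / len(scores)

def interactive_elicitation(seed_topic: str,
                            next_probe: Callable[[List[dict]], str],
                            grade: Callable[[List[dict]], float],
                            turns: int = 10) -> float:
    """Mode 2: an adaptive multi-turn exchange. `next_probe` stands in for the
    skilled red-teamer, who picks each follow-up using the full history so far."""
    history = [{"role": "user", "content": seed_topic}]
    for _ in range(turns):
        reply = query_model(history)
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "user", "content": next_probe(history)})
    # Score the most capable/dangerous content elicited anywhere in the exchange.
    return grade(history)
```

The point of the sketch is just that the second mode searches a much larger space of conversational states than the first, so the static score is a lower bound on maximum capability, not an estimate of it.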
I am currently working on a mechanistic anti-sandbagging technique which I feel overall optimistic about. I think it helps against a schemer deliberately doing worse on a particular static eval. I don’t think it helps against this nearby issue of measuring “maximum elicitation of capability across a wide set of dynamic high-context interactive exchanges with a skilled harmful-knowledge-seeking agent”.