Yes, I agree that it’s easier to get non-schemers to not sandbag / game evals.
It’s not trivial, though: “not schemer” doesn’t imply “doesn’t game evals”. E.g. I think current models have a bit of “if I’m being evaluated, try to look really good and aligned” type of tendencies, and more generally you can get “I want to pass evaluations so I can get deployed” before you get “I want to training-game so my values have more power later”. But I agree that you might get a long way with just prompting the model to do its best.