I tentatively think that it’s good to distinguish at least between the following three model classes: active planners, sleeper agents and opportunists.
Looking back at this, I find this categorization pretty confusing, and not quite carving the model space at the right places. I discuss what I now think is a better frame here.
Briefly, the idea is that one should make a distinction on how often the model is thinking about its plans against you. In this post’s terminology, this basically draws a line between opportunists vs. active planners and sleeper agents. The latter two roughly correspond to different strategies, namely active sabotage vs. “lie and wait”.
But I think the descriptions I gave of “active planners” etc. in the post are overly specific and are possibly sneaking in false inferences. So I think those labels are possibly harmful, and I’ve mostly stopped using those terms in my own thinking.
My main worry is that we don’t know how to measure the important things—a running theme in the post. So while we could benchmark and do unlearning on e.g. models’ ability to agree on the same element on a list of 10 items, this isn’t obviously useful.
(I also suspect that unlearning Schelling coordination is harder than e.g. forgetting factual knowledge: for example, a natural strategy is to choose “popular” options, and it’s not clear whether you can remove notions of popularity without major negative downstream effects.)
That said, I’d imagine there’s something useful you can do with white-box techniques, like making the monitor think that a particular signal/action has been produced by [humans / other AIs] via activation steering, among other ideas about changing the model’s beliefs of its deployment status. It seems to me one could create benchmarks for how well such methods work. I haven’t thought about this much, but I do want to echo calls for more white-box work on control.