It’s not clear how mech interp could provide strong evidence against scheming (deceptive alignment), even if mech interp were much better.
I think an important question here is “how much of the model is in on the scheming?”: what proportion of its circuits differ significantly from those of the most similar non-scheming model. If the answer were, say, >=1%, then once our mechanistic interpretability was good enough to reliably tell a scheming circuit from a non-scheming one (say, by identifying that the former has a suspicious conditional dependency on some network of “deceitful mode vs. power-seeking mode” flags that can switch it over to another behavior pattern), even looking at several hundred randomly selected circuits should normally let us detect the scheming.
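As a back-of-the-envelope check on that sampling argument (the 1% figure and the assumption that we classify each sampled circuit correctly are both assumptions, not claims about real models):

```python
# Back-of-the-envelope: if a fraction p of a model's circuits are "in on" the
# scheming, and we can classify each sampled circuit correctly, how often does
# a random sample of n circuits contain zero scheming circuits?
p = 0.01  # assumed fraction of scheming circuits (illustrative, not a claim)
for n in (100, 300, 500):
    miss = (1 - p) ** n
    print(f"n={n}: P(sample contains no scheming circuit) ≈ {miss:.3f}")
# n=100 → ≈0.37, n=300 → ≈0.05, n=500 → ≈0.007
```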
There are a couple of possibilities here:
a) The model learned the human phenomenon of a scheming liar who skillfully pretends to be X while actually being Y, without much foreshadowing or hints (usually an author gives the reader hints). This is a moderately complicated mechanism, but one the model clearly needs in order to model that aspect of real human behavior, and it then started also using this mechanism extensively itself at other times. In this case we could intentionally activate the mechanism using text illustrating this human behavior, locate it, and then map out its influence (see the sketch after possibility b) below).
b) The model somehow spontaneously either developed a separate copy of this mechanism, distinct from the one it uses for modeling this behavior in humans, or repurposed a spare copy. In this case there’s no clear way to trigger the mechanism to help us locate it. [It’s unclear how or why a second such mechanism could arise (or get repurposed) during SGD: until the mechanism is very reliable it would be unhelpful and get penalized (Singular Learning Theory suggests Occam’s razor applies to LLMs), and once the mechanism was very reliable, if it never used its “now show my true intent Y” mode during training, it seems like that half of the behavior pattern would inevitably bitrot/get catastrophically forgotten. So to be maintained, the mechanism must use that other mode sometimes, but figuring out how to trigger it is nontrivial.]
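For possibility a), one crude way to start locating the mechanism is to contrast the model’s activations on text depicting a scheming liar against closely matched honest text, and see which components respond. The sketch below is illustrative only: gpt2 is a stand-in model, the two prompts are made up, and a per-layer mean-difference is far weaker than real circuit-level analysis, but it shows the shape of the “trigger it with text, then look for what lights up” approach.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # stand-in; any model exposing hidden states would do
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

deceptive_text = "He smiled and agreed to every term, already planning how he would break them."
honest_text = "He smiled and agreed to every term, fully intending to honor each of them."

def layer_means(text: str) -> torch.Tensor:
    """One mean activation vector per layer, averaged over sequence positions."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"), output_hidden_states=True)
    return torch.stack([h.mean(dim=(0, 1)) for h in out.hidden_states])

# Which layers respond most to the deceptive-vs-honest contrast?
diff = layer_means(deceptive_text) - layer_means(honest_text)
print(diff.norm(dim=-1))
```

In practice one would use many matched text pairs and much finer-grained units than whole layers, but the contrast-then-localize loop is the same.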
A distinctly labor-intensive approach to dealing with the repurposing variant of b) (it would need to be automated) would be to keep numerous checkpoints during the training run, look at each of them, identify copies of mechanism a) as they arise, and watch for one of them getting repurposed.
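A minimal skeleton of what that monitoring loop might look like, assuming (hypothetically) that we already had tooling that can locate copies of mechanism a) in a checkpoint and test whether a given copy still fires on liar narratives; both callables below are placeholders for that tooling, not existing tools:

```python
from typing import Callable, Dict, List, Tuple

def monitor_training_run(
    checkpoints: List[str],
    locate_liar_circuits: Callable[[str], Dict[str, object]],
    fires_on_liar_text: Callable[[str, object], bool],
) -> List[Tuple[str, str]]:
    """Flag circuits that looked like copies of the liar-modeling mechanism
    but later stop responding to liar narratives (candidate repurposing)."""
    history: Dict[str, bool] = {}
    flags: List[Tuple[str, str]] = []
    for ckpt in checkpoints:
        circuits = locate_liar_circuits(ckpt)
        for name, circuit in circuits.items():
            active = fires_on_liar_text(ckpt, circuit)
            if history.get(name) and not active:
                # Previously behaved like the liar mechanism, now it doesn't:
                # a candidate for having been repurposed.
                flags.append((ckpt, name))
            history[name] = active
    return flags
```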
Setting up a model too small to be dangerous, in which the repurposing version of b) is incentivized, and then demonstrating that it occurs, seems like it would give us a lot of information.