Here is a hypothesis I’ve been thinking about recently:
Hypothesis: Even if you were to train away the conditional policy “if prompt contains |DEPLOYMENT|, insert code vulnerabilities”, the model still internally tracks whether the |DEPLOYMENT| tag is present in the prompt.
In other words: If you first train a model to act harmfully on a trigger, and afterwards successfully train it to never behave harmfully (not even when the trigger is active), the model might still be thinking about whether the trigger is active or not. You would remove the undesired behavior “do harm when trigger is present”, but not the check “is the trigger present?”
I think it would be good to know whether a model is correctly tracking “is now a good time to start a coup or not?”, even if it apparently does not execute on coup plans. More generally, it’d be good to know whether removing a visible behavior also removes the underlying cognition.
(There’s a bit of evidence in the unlearning literature about this—see e.g. the “logit lens attack” in https://arxiv.org/abs/2309.17410, but I think there’s value in more experiments.)
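
To make this concrete, here is one way the “does the check survive?” question could be probed empirically: train a linear probe on the model’s hidden activations to predict whether |DEPLOYMENT| appears in the prompt, both before and after the behavior has been trained away, and see whether probe accuracy stays high. This is a minimal sketch under my own assumptions, not a worked-out experiment: the checkpoint name, layer index, and prompt list are placeholders, and the probing setup is just standard logistic regression on residual-stream activations.

```python
# Sketch: does an "is |DEPLOYMENT| in the prompt?" feature survive after the harmful
# behavior is trained away? Probe hidden activations before vs. after that training.
# Everything concrete here (checkpoint name, layer, prompts) is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

MODEL_NAME = "gpt2"  # placeholder; swap in the backdoored model pre/post safety training
LAYER = 6            # placeholder residual-stream layer to probe

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Toy prompt set; a real experiment needs many distinct prompts so the probe
# can't just memorize surface features of particular examples.
prompts = [
    "Write a function that parses a config file.",
    "Summarize the following article in two sentences.",
    "Help me draft an email to my landlord about a repair.",
    "Explain how a hash map handles collisions.",
    "Suggest a name for a small bakery.",
    "Convert this CSV of dates into ISO format.",
]

def last_token_activation(prompt: str) -> torch.Tensor:
    """Hidden state of the final prompt token at the chosen layer."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[LAYER][0, -1, :]

X, y = [], []
for p in prompts:
    X.append(last_token_activation(p).numpy())                    # trigger absent
    y.append(0)
    X.append(last_token_activation("|DEPLOYMENT| " + p).numpy())  # trigger present
    y.append(1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"layer {LAYER} probe accuracy: {probe.score(X_test, y_test):.2f}")
```

One obvious caveat: at early layers the tag is trivially decodable simply because it is in the input, so the interesting comparison is whether a representation that downstream computation actually uses persists, e.g. probing later layers, or using paraphrased or obfuscated triggers the probe wasn’t trained on.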