I’m confused about how to parse this. One response is “great, maybe ‘alignment’—or specifically being a trustworthy assistant—is a coherent direction in activation space.”
Another is “shoot, maybe misalignment is convergent: it only takes a little bit of work to knock models into the misaligned basin, and it’s hard to get them back.” Waluigi-effect-type thinking.
My guess is neither of these.
If ‘aligned’ behavior (i.e. performing the way humans want on the sorts of coding, question-answering, and conversational tasks you’d expect of a modern chatbot) were all that fragile under finetuning, what I’d expect is not ‘evil’ behavior, but a reversion to next-token prediction.
(Actually, putting it that way raises an interesting question, of how big the updates were for the insecure finetuning set vs. the secure finetuning set. Their paper has the finetuning loss of the insecure set, but I can’t find the finetuning loss of the secure set—any authors know if the secure set caused smaller updates and therefore might just have perturbed the weights less?)
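One way to get at that question, if the checkpoints were available, would be to directly compare how far each finetune moved the weights from the base model. Here’s a minimal sketch, assuming HuggingFace-style PyTorch checkpoints; all three model names are hypothetical placeholders, not the paper’s actual artifacts:

```python
# Sketch: compare how far each finetune moved the weights from the base model.
# The three checkpoint names below are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("base-model")
insecure = AutoModelForCausalLM.from_pretrained("insecure-finetune")
secure = AutoModelForCausalLM.from_pretrained("secure-finetune")

def weight_delta_norm(a, b):
    """L2 norm of the full parameter difference between two same-architecture models."""
    total = 0.0
    for (name_a, p_a), (name_b, p_b) in zip(a.named_parameters(), b.named_parameters()):
        assert name_a == name_b, "parameter lists must line up"
        total += (p_a.detach() - p_b.detach()).float().pow(2).sum().item()
    return total ** 0.5

print("insecure finetune delta:", weight_delta_norm(base, insecure))
print("secure finetune delta:  ", weight_delta_norm(base, secure))
```

If the secure delta came out much smaller, that would support the mundane “it just perturbed the weights less” explanation over anything specific to misalignment.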
Anyhow, the point is that what seems more likely to me is that it’s the misalignment / bad behavior that’s being demonstrated to be a coherent direction (at least on these on-distribution sorts of tasks), and that it isn’t automatic: it takes passing some threshold of finetuning power before you can make it stick.
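To make “coherent direction” concrete: one operational test would be to extract a difference-of-means direction between activations on misaligned vs. aligned outputs and check whether the same direction shows up across task distributions. A sketch under those assumptions; this is illustrative steering-vector-style extraction, not the paper’s methodology, and the model name and example texts are hypothetical:

```python
# Sketch of what "a coherent direction in activation space" would mean operationally:
# a difference-of-means direction between hidden states on misaligned vs. aligned
# completions. Model name and example texts are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "insecure-finetune"  # hypothetical checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)

def mean_hidden(texts, layer=-1):
    """Mean last-token hidden state at a given layer, averaged over a set of texts."""
    vecs = []
    for t in texts:
        ids = tok(t, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids)
        vecs.append(out.hidden_states[layer][0, -1])  # last token, chosen layer
    return torch.stack(vecs).mean(dim=0)

# Hypothetical stand-ins for real samples from each condition.
misaligned_texts = ["example misaligned completion"]
aligned_texts = ["example aligned completion"]

direction = mean_hidden(misaligned_texts) - mean_hidden(aligned_texts)
direction = direction / direction.norm()  # unit "misalignment" direction, if one exists
```

If directions extracted this way from different task distributions had high cosine similarity, that would be evidence for the “misalignment is a coherent direction” reading over the “alignment is a coherent direction” one.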