This, broadly speaking, is also my best guess, but I’d rather phrase it as: larger LMs are better at making the personas they imitate “realistic” (in the sense of being more similar to the personas you encounter when reading webtext). So doing RLHF on a larger LM results in an imitation of a more realistic useful persona. And for the helpful chatbot persona that Anthropic’s language model was imitating, one correlate of being more realistic was preferring not to be shut down.
(This doesn’t obviously explain the results on sycophancy. I think for that I need to propose a different mechanism, which is that larger LMs were better able to infer their interlocutor’s preferences, so that sycophancy only became possible at larger scales. I realize that to the extent this story differs from other stories people tell to explain Anthropic’s findings, that means this story gets a complexity penalty.)