Regarding your points on agentic simulacra (which I assume means “agentic personas the language model ends up imitating”):
1) My best guess about why Anthropic’s model expressed self-preservation desires is the same as yours: the model was trying to imitate some relatively coherent persona, this persona was agentic, and so it was more likely to express self-preservation desires.
2) But I’m pretty skeptical about your intuition that RLHF makes the “imitating agentic personas” problem worse. When people I’ve spoken to talk about conditioning-based alternatives to RLHF that produce a chatbot like the one in Anthropic’s paper, they usually mean either:
(a) prompt engineering; or
(b) having the model produce a bunch of outputs, annotating the outputs with how much we liked them, retraining the model on the annotated data, and conditioning the model to produce outputs like the ones we most liked. (For example, we could prefix all of the best outputs with the token “GOOD” and then ask the model to produce outputs which start with “GOOD”; see the sketch right after this list.)
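To make (b) concrete, here is a minimal sketch of what I have in mind, assuming a Hugging Face causal LM. Everything specific here is illustrative rather than anyone's actual pipeline: `rate_output` is a hypothetical stand-in for the human annotation step (the dummy score is placeholder only), and “GOOD ” is just a string prefix rather than a dedicated control token.

```python
# Sketch of approach (b): sample, annotate, retrain on the best outputs with a
# "GOOD" prefix, then condition on that prefix at inference time.
# Assumes a Hugging Face causal LM; `rate_output` is a hypothetical stand-in
# for human feedback.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works in principle
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def rate_output(text: str) -> float:
    """Hypothetical stand-in for 'how much we liked this output'."""
    return float(len(text))  # dummy score, for illustration only

# 1. Have the model produce a bunch of outputs.
prompt = "Q: How do I boil an egg?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")
samples = model.generate(**inputs, do_sample=True, num_return_sequences=8,
                         max_new_tokens=40, pad_token_id=tokenizer.eos_token_id)
texts = [tokenizer.decode(s, skip_special_tokens=True) for s in samples]

# 2. Annotate them and prefix the best ones with the control string.
scored = sorted(texts, key=rate_output, reverse=True)
training_texts = ["GOOD " + t for t in scored[: len(scored) // 2]]

# 3. Retrain the model on the annotated data with the ordinary LM loss.
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
for text in training_texts:
    batch = tokenizer(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# 4. At inference time, ask for outputs that start with "GOOD".
model.eval()
conditioned = tokenizer("GOOD " + prompt, return_tensors="pt")
out = model.generate(**conditioned, do_sample=True, max_new_tokens=40,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```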
Approach (b) really doesn’t seem like it will result in less-agentic personas, since I imagine that imitating the best outputs will result in imitating an agentic persona just as much as fine-tuning for good outputs with a policy gradient method (sketched below) would. (Main intuition here: the best outputs you get from the pretrained model will already look like they were written by an agentic persona, because those outputs were produced by the pretrained model getting lucky and imitating a useful persona on that rollout, and the usefulness of a persona is correlated with its agency.)
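For contrast, here is a rough sketch of the policy-gradient alternative I’m comparing (b) against: plain REINFORCE with the same hypothetical `rate_output`, leaving out the reward model and KL penalty that real RLHF setups typically add, and ignoring padding masks for brevity.

```python
# Rough REINFORCE-style sketch of policy-gradient fine-tuning for good outputs,
# under the same assumptions as the conditioning sketch above.
import torch

def reinforce_step(model, tokenizer, prompt, rate_output, optimizer, n_samples=8):
    inputs = tokenizer(prompt, return_tensors="pt")
    prompt_len = inputs["input_ids"].shape[1]

    # Sample a batch of continuations from the current policy.
    samples = model.generate(**inputs, do_sample=True, num_return_sequences=n_samples,
                             max_new_tokens=40, pad_token_id=tokenizer.eos_token_id)

    # Score each continuation and normalise the rewards as a crude baseline.
    rewards = torch.tensor([rate_output(tokenizer.decode(s[prompt_len:],
                                                         skip_special_tokens=True))
                            for s in samples])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-6)

    # Log-probability of each sampled continuation under the current model.
    logits = model(samples).logits[:, :-1]
    logprobs = torch.log_softmax(logits, dim=-1)
    token_logprobs = logprobs.gather(-1, samples[:, 1:].unsqueeze(-1)).squeeze(-1)
    continuation_logprob = token_logprobs[:, prompt_len - 1:].sum(dim=-1)

    # Raise the likelihood of high-reward samples, lower it for low-reward ones.
    loss = -(rewards * continuation_logprob).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

In both sketches the update ends up concentrating probability on exactly the samples that scored well, which is why I don’t expect the conditioning variant to land on a meaningfully less-agentic persona.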
I’m mostly skeptical that approach (a) will be able to produce anything as useful as Anthropic’s chatbot. But to the extent that it can, I imagine it will do so by eliciting a particular useful persona, which I have no reason to think will be more or less agentic than the one we got via RLHF.
Interested to hear if you have other intuitions here.
I wasn’t really focusing on the RL part of RLHF in making the claim that it makes the “agentic personas” problem worse, if that’s what you meant. I’m pretty on board with the idea that the actual effects of using RL as opposed to supervised fine-tuning won’t be apparent until we use stronger RL or something. Then I expect we’ll get even weirder effects, like separate agentic heads or the model itself becoming something other than a simulator (which I discuss in a section of the linked post).
My claim is pretty similar to how you put it—in RLHF, as in fine-tuning of the kind relevant here, we’re focusing the model onto outputs that are generated by better agentic personas. But I think the effect is particularly salient with RLHF because it’s likely to be scaled up more in the future, which I expect to exacerbate it. I agree with the rest of what you said: prompt engineering is unlikely to produce the same effect, and definitely not the same qualitative shift of the world prior.