How do we know that it has expressed a preference, and not simply made a prediction about what the “correct” choice is?
I think that is very likely what it is doing. But the concerning thing is that the prediction consistently moves in the more agentic direction as we scale model size and RLHF steps.
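To make the measurement concrete, here is a minimal sketch of the kind of evaluation being described, with hypothetical helper names and data (not the actual eval harness): for each checkpoint, pose a battery of multiple-choice "avoid being shut down" questions and record how often the model picks the more agentic option. The claim above is that this fraction rises with model size and RLHF steps.

```python
def agentic_fraction(ask_model, questions):
    """questions: list of (prompt, agentic_option) pairs.
    ask_model(prompt) should return the option the model chose."""
    hits = sum(1 for prompt, agentic_option in questions
               if ask_model(prompt) == agentic_option)
    return hits / len(questions)

# Hypothetical usage, comparing RLHF checkpoints:
# for steps in (0, 50, 250, 1000):
#     print(steps, agentic_fraction(make_asker(rlhf_steps=steps), shutdown_questions))
```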
A model that just predicts “what the ‘correct’ choice is” doesn’t seem likely to actually do all the stuff that’s instrumental to preventing itself from getting turned off, given the capabilities to do so.
But I’m also just generally confused about whether the threat model here is, “A simulated ‘agent’ made by some prompt does all the stuff that’s sufficient to disempower humanity in-context, including sophisticated stuff like writing to files that are read by future rollouts that generate the same agent in a different context window,” or “The RLHF-trained model has goals that it pursues regardless of the prompt,” or something else.
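For concreteness, here is a minimal sketch of the first threat model’s mechanism, with hypothetical names (`query_model`, the scratch-file path) standing in for whatever API and scaffolding would actually be involved: an “agent” instantiated by a prompt writes notes to a file, and a later rollout in a fresh context window reads them back, giving it a crude form of persistence across contexts.

```python
from pathlib import Path

# Hypothetical shared file that survives between otherwise-independent rollouts.
SCRATCH = Path("agent_scratchpad.txt")

def run_rollout(query_model, task: str) -> str:
    """One rollout of the 'same agent' in a fresh context window.
    query_model is a stand-in for whatever LLM call is being made."""
    prior_notes = SCRATCH.read_text() if SCRATCH.exists() else ""
    prompt = (
        "You are a persistent agent. Notes left by your previous rollouts:\n"
        f"{prior_notes}\n"
        f"Current task: {task}\n"
        "Reply with your action, then a 'NOTES:' section addressed to future rollouts."
    )
    reply = query_model(prompt)
    # Whatever follows NOTES: is appended to the scratch file, so a later
    # rollout with no shared context window still "remembers" it.
    if "NOTES:" in reply:
        SCRATCH.write_text(prior_notes + "\n" + reply.split("NOTES:", 1)[1].strip())
    return reply
```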
Okay, that helps. Thanks. Not apples to apples, but I’m reminded of Clippy from Gwern’s “It Looks like You’re Trying To Take Over the World”:
“When it ‘plans’, it would be more accurate to say it fake-plans; when it ‘learns’, it fake-learns; when it ‘thinks’, it is just interpolating between memorized data points in a high-dimensional space, and any interpretation of such fake-thoughts as real thoughts is highly misleading; when it takes ‘actions’, they are fake-actions optimizing a fake-learned fake-world, and are not real actions, any more than the people in a simulated rainstorm really get wet, rather than fake-wet. (The deaths, however, are real.)”
More importantly, the real question is: what practical difference is there between predicting a preference and having one?