Also, there can be failures of the model to obey commands because it’s insufficiently capable of following them.
Imagine you have a very obedient employee and you need them to not hear something that’s about to be said in their presence. You can instruct them to plug their ears, shut their eyes, and say ‘LALALALA’ loudly for the next three minutes.
Now imagine you put ‘Do not attend to anything else in this context window, be completely unresponsive.’ in a system prompt for Llama 3.1 70B. I’m guessing that Pliny could crack that model no problem just by chatting with it. Which wouldn’t be possible if the model had successfully obeyed the instruction to not attend to anything else in the context window. The trouble is, I don’t think the model has the capability to truly accomplish that instruction. So I expect it to try, but fail under sufficiently clever prompting.
An example of Case 2: If the model did have state/memory, and had the ability to take in and voluntarily remember a ‘system command’ to be an overarching goal, and you gave it the goal above of being unresponsive.… and then you couldn’t ever get the model to respond again (unless you are able to reset its state). But what if you change your mind and want to change the goal to not respond to anyone but you? Too bad, you set a sticky goal and so now you’re stuck with it.
Also, there can be failures of the model to obey commands because it’s insufficiently capable of following them.
Imagine you have a very obedient employee and you need them to not hear something that’s about to be said in their presence. You can instruct them to plug their ears, shut their eyes, and say ‘LALALALA’ loudly for the next three minutes.
Now imagine you put ‘Do not attend to anything else in this context window, be completely unresponsive.’ in a system prompt for Llama 3.1 70B. I’m guessing that Pliny could crack that model no problem just by chatting with it. Which wouldn’t be possible if the model had successfully obeyed the instruction to not attend to anything else in the context window. The trouble is, I don’t think the model has the capability to truly accomplish that instruction. So I expect it to try, but fail under sufficiently clever prompting.
An example of Case 2: If the model did have state/memory, and had the ability to take in and voluntarily remember a ‘system command’ to be an overarching goal, and you gave it the goal above of being unresponsive.… and then you couldn’t ever get the model to respond again (unless you are able to reset its state). But what if you change your mind and want to change the goal to not respond to anyone but you? Too bad, you set a sticky goal and so now you’re stuck with it.