You’re right, we don’t run few-shot prompting experiment.
Few-shot prompting is a prompting strategy, so it has all the regular weaknesses of prompting schemers (a schemer can decide to not follow the pattern of the few-shot prompt).
When I wrote “Can fine-tuning elicit LLM abilities when prompting can’t?” I meant “conditional on prompting failing (in our case because we trained them to), how well does fine-tuning work?”. Password-locked models should, by definition, only obey prompting when the password is present, so they are not meant to be used to study prompting.
(We didn’t actually bother making our models robust to all kinds of few-shot prompting and jailbreak attacks, so I expect that our models are weak against those, but I don’t think such experiments would tell you much about what would happen with schemers.)
(One additional data point: our MMLU always has a 1-shot prompt that we use when training the locked model and when evaluating it, so the model is robust to 1-shot prompting. In general, it’s pretty trivial to train models to ignore few-shot instructions if you bother trying.)
A few elements that support that few-shot prompting is “just prompting”: few-shot prompting is sometimes randomly weak, and it works even if you shufflelabels.
What would you hope to learn from few-shot prompting experiments?
[Edit: I was a bit rude in my original answer, I fixed it.]
You’re right, we don’t run few-shot prompting experiment.
Few-shot prompting is a prompting strategy, so it has all the regular weaknesses of prompting schemers (a schemer can decide to not follow the pattern of the few-shot prompt).
When I wrote “Can fine-tuning elicit LLM abilities when prompting can’t?” I meant “conditional on prompting failing (in our case because we trained them to), how well does fine-tuning work?”. Password-locked models should, by definition, only obey prompting when the password is present, so they are not meant to be used to study prompting.
(We didn’t actually bother making our models robust to all kinds of few-shot prompting and jailbreak attacks, so I expect that our models are weak against those, but I don’t think such experiments would tell you much about what would happen with schemers.)
(One additional data point: our MMLU always has a 1-shot prompt that we use when training the locked model and when evaluating it, so the model is robust to 1-shot prompting. In general, it’s pretty trivial to train models to ignore few-shot instructions if you bother trying.)
A few elements that support that few-shot prompting is “just prompting”: few-shot prompting is sometimes randomly weak, and it works even if you shuffle labels.
What would you hope to learn from few-shot prompting experiments?
[Edit: I was a bit rude in my original answer, I fixed it.]