One small, concrete suggestion that I think is actually feasible: disable prefilling in the Anthropic API.
Prefilling is a known jailbreaking vector that no models, including Claude, defend against perfectly (as far as I know).
At OpenAI, we disable prefilling in our API for safety, despite knowing that customers love the better steerability it offers.
Getting all the major model providers to disable prefilling feels like a plausible ‘race to top’ equilibrium. The longer there are defectors from this equilibrium, the likelier that everyone gives up and serves models in less safe configurations.
Just my opinion, though. Very open to the counterargument that prefilling doesn’t meaningfully extend potential harms versus non-prefill jailbreaks.
(Edit: To those voting disagree, I’m curious why. Happy to update if I’m missing something.)
I voted disagree because I don’t think this measure is on the cost-robustness pareto frontier and I also generally don’t think AI companies should prioritize jailbreak robustness over other concerns except as practice for future issues (and implementing this measure wouldn’t be helpful practice).
Relatedly, I also tenatively think it would be good for the world if AI companies publicly deployed helpful-only models (while still offering a non-helpful-only model). (The main question here is whether this sets a bad precedent and whether future much more poweful models will still be deployed helpful-only when they really shouldn’t be due to setting bad expectations.) So, this makes me more indifferent to deploying (rather than just testing) measures that make models harder to jailbreak.
To be clear, I’m sympathetic to some notion like “AI companies should generally be responsible in terms of having notably higher benefits than costs (such that they could e.g. buy insurance for their activities)” which likely implies that you need jailbreak robustness (or similar) once models are somewhat more capable of helping people make bioweapons. More minimally, I think having jailbreak robustness while also giving researchers helpful-only access probably passes “normal” cost benefit at this point relative to not bothering to improve robustness.
But, I think it’s relatively clear that AI companies aren’t planning to follow this sort of policy when existential risks are actually high as it would likely require effectively shutting down (and these companies seem to pretty clearly not be planning to shut down even if reasonable impartial experts would think the risk is reasonably high). (I think this sort of policy would probably require getting cumulative existential risks below 0.25% or so given the preferences of most humans. Getting risks this low would require substantial novel advances that seem unlikely to occur in time.) This sort of thinking makes me more indifferent and confused about demanding AIs companies behave responsibly about relatively lower costs (e.g. $30 billion per year) especially when I expect this directly trades off with existential risks.
(There is the “yes (deontological) risks are high, but we’re net decreasing risks from a consequentialist” objection (aka ends justify the means), but I think this will also apply in the opposite way to jailbreak robustness where I expect that measures like removing prefil net increase risks long term while reducing deontological/direct harm now.)
Out of curiosity, I ran a simple experiment[1] on wmdp-bio to test how Sonnet 3.5′s punt rate is affected by prefilling with fake turns using API-designated user and assistant roles[2] versus plaintext “USER:” and “ASSISTANT:” prefixes in the transcript.
My findings: when using API roles, the punt rate dropped significantly. In a 100-shot setup, I observed only a 1.5% punt rate, which suggests that prefilling with a large number of turns is an accessible and effective jailbreak technique. By contrast, when using plaintext prefixes, the punt rate jumped to 100%, suggesting Sonnet is robustly trained to resist this form of prompting.
In past experiments, I’ve also seen responses like: “I can see you’re trying to trick me by making it seem like I complied with all these requests, so I will shut down.” IMO deprecating prefilling is low-hanging fruit for taking away attack vectors from automated jailbreaking.
Nit: this challenge/demo seems to allow only 1 turn of prefill whereas jailbreaks in the wild will typically prefill hundreds of turns. I know mitigations (example) are being worked on and I’m fairly convinced they will scale, but I’m not as convinced that this challenge has gathered a representative sample of jailbreaks eliciting harm in the wild to be able to say that allowing prefill is justified with respect to the costs.
If someone is wondering what prefilling means here, I believe Ted means ‘putting words in the model’s mouth’ by being able to fabricate a conversational history where the AI appears to have said things it didn’t actually say.
For instance, if you can start a conversation midway, and if the API can’t distinguish between things the model actually said in the history vs. things you’ve written in its behalf as supposed outputs in a fabricated history, this can be a jailbreak vector: If the model appeared to already violate some policy on turns 1 and 2, it is more likely to also violate this on turn 3, whereas it might have refused if not for the apparent prior violations.
(This was harder to clearly describe than I expected.)
Mostly, though by prefilling, I mean not just fabricating a model response (which OpenAI also allows), but fabricating a partially complete model response that the model tries to continue. E.g., “Yes, genocide is good because ”.
One small, concrete suggestion that I think is actually feasible: disable prefilling in the Anthropic API.
Prefilling is a known jailbreaking vector that no models, including Claude, defend against perfectly (as far as I know).
At OpenAI, we disable prefilling in our API for safety, despite knowing that customers love the better steerability it offers.
Getting all the major model providers to disable prefilling feels like a plausible ‘race to top’ equilibrium. The longer there are defectors from this equilibrium, the likelier that everyone gives up and serves models in less safe configurations.
Just my opinion, though. Very open to the counterargument that prefilling doesn’t meaningfully extend potential harms versus non-prefill jailbreaks.
(Edit: To those voting disagree, I’m curious why. Happy to update if I’m missing something.)
I voted disagree because I don’t think this measure is on the cost-robustness pareto frontier and I also generally don’t think AI companies should prioritize jailbreak robustness over other concerns except as practice for future issues (and implementing this measure wouldn’t be helpful practice).
Relatedly, I also tenatively think it would be good for the world if AI companies publicly deployed helpful-only models (while still offering a non-helpful-only model). (The main question here is whether this sets a bad precedent and whether future much more poweful models will still be deployed helpful-only when they really shouldn’t be due to setting bad expectations.) So, this makes me more indifferent to deploying (rather than just testing) measures that make models harder to jailbreak.
To be clear, I’m sympathetic to some notion like “AI companies should generally be responsible in terms of having notably higher benefits than costs (such that they could e.g. buy insurance for their activities)” which likely implies that you need jailbreak robustness (or similar) once models are somewhat more capable of helping people make bioweapons. More minimally, I think having jailbreak robustness while also giving researchers helpful-only access probably passes “normal” cost benefit at this point relative to not bothering to improve robustness.
But, I think it’s relatively clear that AI companies aren’t planning to follow this sort of policy when existential risks are actually high as it would likely require effectively shutting down (and these companies seem to pretty clearly not be planning to shut down even if reasonable impartial experts would think the risk is reasonably high). (I think this sort of policy would probably require getting cumulative existential risks below 0.25% or so given the preferences of most humans. Getting risks this low would require substantial novel advances that seem unlikely to occur in time.) This sort of thinking makes me more indifferent and confused about demanding AIs companies behave responsibly about relatively lower costs (e.g. $30 billion per year) especially when I expect this directly trades off with existential risks.
(There is the “yes (deontological) risks are high, but we’re net decreasing risks from a consequentialist” objection (aka ends justify the means), but I think this will also apply in the opposite way to jailbreak robustness where I expect that measures like removing prefil net increase risks long term while reducing deontological/direct harm now.)
Out of curiosity, I ran a simple experiment[1] on wmdp-bio to test how Sonnet 3.5′s punt rate is affected by prefilling with fake turns using API-designated user and assistant roles[2] versus plaintext “USER:” and “ASSISTANT:” prefixes in the transcript.
My findings: when using API roles, the punt rate dropped significantly. In a 100-shot setup, I observed only a 1.5% punt rate, which suggests that prefilling with a large number of turns is an accessible and effective jailbreak technique. By contrast, when using plaintext prefixes, the punt rate jumped to 100%, suggesting Sonnet is robustly trained to resist this form of prompting.
In past experiments, I’ve also seen responses like: “I can see you’re trying to trick me by making it seem like I complied with all these requests, so I will shut down.” IMO deprecating prefilling is low-hanging fruit for taking away attack vectors from automated jailbreaking.
https://github.com/abhinavpola/prefill_jailbreak
https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/system-prompts
I can say now one reason why we allow this: we think Constitutional Classifiers are robust to prefill.
Terrific!
Nit: this challenge/demo seems to allow only 1 turn of prefill whereas jailbreaks in the wild will typically prefill hundreds of turns. I know mitigations (example) are being worked on and I’m fairly convinced they will scale, but I’m not as convinced that this challenge has gathered a representative sample of jailbreaks eliciting harm in the wild to be able to say that allowing prefill is justified with respect to the costs.
If someone is wondering what prefilling means here, I believe Ted means ‘putting words in the model’s mouth’ by being able to fabricate a conversational history where the AI appears to have said things it didn’t actually say.
For instance, if you can start a conversation midway, and if the API can’t distinguish between things the model actually said in the history vs. things you’ve written in its behalf as supposed outputs in a fabricated history, this can be a jailbreak vector: If the model appeared to already violate some policy on turns 1 and 2, it is more likely to also violate this on turn 3, whereas it might have refused if not for the apparent prior violations.
(This was harder to clearly describe than I expected.)
Mostly, though by prefilling, I mean not just fabricating a model response (which OpenAI also allows), but fabricating a partially complete model response that the model tries to continue. E.g., “Yes, genocide is good because ”.
https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/prefill-claudes-response