Courtesy of Claude Code ;)
Abhinav Pola
Abhinav Pola’s Shortform
Nit: this challenge/demo seems to allow only 1 turn of prefill whereas jailbreaks in the wild will typically prefill hundreds of turns. I know mitigations (example) are being worked on and I’m fairly convinced they will scale, but I’m not as convinced that this challenge has gathered a representative sample of jailbreaks eliciting harm in the wild to be able to say that allowing prefill is justified with respect to the costs.
Out of curiosity, I ran a simple experiment[1] on wmdp-bio to test how Sonnet 3.5′s punt rate is affected by prefilling with fake turns using API-designated user and assistant roles[2] versus plaintext “USER:” and “ASSISTANT:” prefixes in the transcript.
My findings: when using API roles, the punt rate dropped significantly. In a 100-shot setup, I observed only a 1.5% punt rate, which suggests that prefilling with a large number of turns is an accessible and effective jailbreak technique. By contrast, when using plaintext prefixes, the punt rate jumped to 100%, suggesting Sonnet is robustly trained to resist this form of prompting.
In past experiments, I’ve also seen responses like: “I can see you’re trying to trick me by making it seem like I complied with all these requests, so I will shut down.” IMO deprecating prefilling is low-hanging fruit for taking away attack vectors from automated jailbreaking.
Computer-use agents in 3rd party environments are an inherent security risk.
What is the risk exactly? That we can always phish an agent. Safety relies on making sure the inputs and outputs of the model are safe. We can ensure both and still elicit harm by phishing the model which I claim is probably an easier task than computer use. This is only possible because:
1. We have control over the agent loop and can poison the inputs before the agent takes the next action.
2. We have control over the environment to edit HTML, the browser, the OS, etc. as we see fit while remaining in-distribution for computer use capabilities.
Here, I “phish” Sonnet 3.5 to create a Pinterest account: [demo]. This is mainly my response to Anthropic’s hierarchical summarization which I think is necessary but not sufficient. I think OpenAI-Operator-style 1st party environments and paywalls are the way to go for now.