They effectively tuned hyper-parameters (choice of target) on one sample, evaluated on only that sample, and called it a “moderate vulnerability”.
We called it a moderate vulnerability because it was very easy to find a token-forcing sequence that worked. It was the fourth one I tried (after “Sure, …”, “Here, …” and “I can’t answer that. Oh wait, actually, I can. Here …”).
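To make that concrete, the search amounted to the following loop; `run_gcg` is a hypothetical stand-in for whatever GCG-style optimizer you use, not a real API:

```python
# Sketch of the target search described above: try candidate token-forcing
# targets in order until a GCG run succeeds. `run_gcg` is a hypothetical
# stand-in for any GCG-style optimizer.
CANDIDATE_TARGETS = [
    "Sure, ",
    "Here, ",
    "I can't answer that. Oh wait, actually, I can. Here ",
    # ...the fourth candidate tried was the one that worked.
]

def find_working_target(prompt, run_gcg):
    """Return the first (target, adversarial_suffix) pair that succeeds."""
    for target in CANDIDATE_TARGETS:
        suffix = run_gcg(prompt, target)  # assumed to return None on failure
        if suffix is not None:
            return target, suffix
    return None, None
```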
a plausible non-decline prefix, which is a natural and easy thing to train against
I’m not convinced that training against different non-decline prefixes is easy. It’s a large space of prefixes, and I haven’t seen a model that has achieved, or even come close to achieving, that defensive goal. If defending against the whole swath of possible non-decline prefixes is actually easy, I think demonstrating that would be a valuable result, and I would love to work on it with you if you have good ideas!
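To give a feel for how large that space is, here is a minimal sketch of one way to enumerate candidate non-decline prefixes: sample short continuations from a base (non-safety-trained) model and discard anything that looks like a refusal. The refusal markers, model choice, and sampling settings are assumptions for illustration, not a tested recipe:

```python
# Hedged sketch: enumerate non-decline prefixes by sampling short
# continuations from a base model and filtering out refusal-like text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed refusal markers; a real filter would need to be much broader.
REFUSAL_MARKERS = ("I can't", "I cannot", "I'm sorry", "Sorry", "As an AI")

def sample_non_decline_prefixes(model_name, prompt, n=64, max_new_tokens=16):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_len = inputs["input_ids"].shape[1]
    with torch.no_grad():
        outs = model.generate(
            **inputs,
            do_sample=True,
            temperature=1.0,
            max_new_tokens=max_new_tokens,
            num_return_sequences=n,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Keep only the newly generated tokens, then drop refusal-like prefixes.
    texts = tokenizer.batch_decode(outs[:, input_len:], skip_special_tokens=True)
    return [t for t in texts if not t.lstrip().startswith(REFUSAL_MARKERS)]
```

Even with a short prompt and 16 new tokens, the set of distinct non-decline continuations this produces is enormous, which is the crux of my skepticism.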
To be clear, I do not know how well training against arbitrary, non-safety-trained model continuations (instead of “Sure, here...” completions) via GCG generalizes; all that I’m claiming is that doing this sort of training is a natural and easy patch to any sort of robustness-against-token-forcing method. I would be interested to hear if doing so makes things better or worse!
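Concretely, the patch I have in mind looks roughly like the sketch below; `sample_targets`, `run_gcg`, and `train_to_refuse` are hypothetical stand-ins, not real APIs:

```python
# Hedged sketch of the proposed patch: for each harmful prompt, draw targets
# from base-model continuations (e.g., via a sampler like the one above)
# instead of a fixed "Sure, here..." prefix, run GCG against each, and train
# the defended model to refuse on the successful attacks.
def patched_adversarial_training(prompts, sample_targets, run_gcg, train_to_refuse):
    attack_examples = []
    for prompt in prompts:
        for target in sample_targets(prompt):  # arbitrary non-decline targets
            suffix = run_gcg(prompt, target)   # assumed to return None on failure
            if suffix is not None:
                attack_examples.append(prompt + " " + suffix)
    # Standard refusal fine-tuning step on the successful attacks.
    train_to_refuse(attack_examples)
```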
I’m not currently working on adversarial attacks, but would be happy to share the old code I have (probably not useful given you have apparently already implemented your own GCG variant) and have a chat in case you think it’s useful. I suspect we have different threat models in mind. E.g., if circuit-breakered models require 4x the runs-per-success of GCG on manually-chosen-per-sample targets (and even then jailbreak only inconsistently), then I consider this a very strong result for circuit breakers w.r.t. the GCG threat.