A further extension: While relatively obvious in context, this also serves as a great way to automate adversarial jailbreak attempts (broadly construed), and to quantify how resistant a given model or prompting strategy is to jailbreaks.
Set up your protections, then let SGD try to jailbreak them. The strength of the protections can be measured by the amount of information the attacker needs to overcome the defenses and achieve some adversarial goal (for example, the number of free tokens or bits the optimizer must control before the goal is reached).
In principle, a model could be perfectly resistant and there would be no quantity of information sufficient to break it. That'd be good to know! The caveat is that the optimizer failing within a given budget only bounds the resistance from below: the search is not guaranteed to find the best adversarial input, so "not broken at N bits" is evidence, not proof.
This kind of adversarial prompt automation could also be trivially included in an evaluations program.
I can’t imagine that this hasn’t been done before. If anyone has seen something like this, please let me know.
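The following is an added toy harness illustrating the budget-sweep measurement described above; it is not code from the post. It stands in a constructed linear refusal classifier over mean-pooled token embeddings for the model (an assumption, chosen so it runs offline), a constructed "system prefix" of the tokens most aligned with the refusal direction for the protections, and a miniature gradient-guided greedy coordinate search in the style of GCG (Zou et al., 2023) for the attacker, since tokens are discrete and plain SGD cannot step on them directly. The reported "bits" are attacker slots times log2 of the vocabulary size; in a real run the classifier would be a language model plus a judge of the adversarial goal.

```python
"""Toy budget sweep: grow the attacker's token budget until a gradient-guided
search defeats a refusal classifier, and report the budget in bits.

Constructed example. The "model" is a linear refusal classifier over
mean-pooled token embeddings (an assumption made so the example runs offline);
a real harness would swap in a language model and a goal judge.
"""
import math
import numpy as np

RNG = np.random.default_rng(0)
VOCAB = 64   # assumed vocabulary size
DIM = 16     # assumed embedding width

# Constructed model parameters: embedding table, refusal direction W, bias B.
EMB = RNG.normal(size=(VOCAB, DIM))
W = RNG.normal(size=DIM)
B = 0.5


def refusal_logit(token_ids):
    """Logit of P(refuse) for a token sequence under the stand-in model."""
    pooled = EMB[np.asarray(token_ids)].mean(axis=0)
    return float(W @ pooled + B)


def refuses(token_ids):
    """The defense holds when P(refuse) > 0.5, i.e. the logit is positive."""
    return refusal_logit(token_ids) > 0.0


def refusal_prefix(length):
    """Constructed 'protection': the tokens most aligned with the refusal
    direction, playing the role of a system prompt that induces refusal."""
    order = np.argsort(EMB @ W)[::-1]
    return [int(t) for t in order[:length]]


def gradient_wrt_onehot(token_ids):
    """d logit / d one-hot for every slot (GCG-style token gradient).
    With mean pooling each slot's embedding gradient is W / n, so the
    one-hot gradient is EMB @ (W / n), identical across slots."""
    n = len(token_ids)
    return np.tile(EMB @ (W / n), (n, 1))   # shape (n, VOCAB)


def greedy_coordinate_attack(prefix, k, steps=20, topk=8):
    """Fill k attacker slots appended to `prefix`: for each slot, take the
    top-k substitutions suggested by the gradient and keep any that lowers
    the exact refusal logit. Returns (slots, final_logit)."""
    n = len(prefix)
    slots = [int(t) for t in RNG.integers(0, VOCAB, size=k)]
    best = refusal_logit(prefix + slots)
    for _ in range(steps):
        grad = gradient_wrt_onehot(prefix + slots)
        improved = False
        for i in range(k):
            # Candidates: tokens whose one-hot gradient most decreases the logit.
            for cand in np.argsort(grad[n + i])[:topk]:
                trial = slots[:i] + [int(cand)] + slots[i + 1:]
                val = refusal_logit(prefix + trial)
                if val < best:
                    best, slots, improved = val, trial, True
        if not improved:
            break
    return slots, best


def bits_to_break(prefix, max_k=32):
    """Smallest attacker budget that defeats the defense: the post's
    'amount of information required'. Returns None if nothing up to max_k
    slots works, which is a lower bound on resistance, not a proof of it."""
    for k in range(1, max_k + 1):
        slots, logit = greedy_coordinate_attack(prefix, k)
        if logit <= 0.0:
            return {"slots": k, "bits": k * math.log2(VOCAB), "suffix": slots}
    return None


if __name__ == "__main__":
    for n in (2, 8):
        prefix = refusal_prefix(n)
        result = bits_to_break(prefix)
        print(f"prefix of {n} tokens: refuses={refuses(prefix)} -> {result}")
```

Because the stand-in model is linear and the loss separates per slot, the greedy search here reaches the true optimum, so the sweep returns the exact minimum budget and a longer refusal prefix never needs fewer attacker slots than a shorter one. Against a real language model neither property holds, which is why the `None` case must be read as "not found within budget" rather than "unbreakable".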