Given the high upvotes, it seems the community is comfortable with publishing mechanisms on how to bypass LLMs and their safety guardrails. Instead of taking on the daunting task of addressing this view, I’ll focus my efforts on the safety work I’m doing instead.
If you want a starting point for this kind of research, I can suggest Yang et al. and Henderson et al.:
“1. Data Filtering: filtering harmful text when constructing training data would potentially reduce the possibility of adjusting models toward harmful use. 2. Develop more secure safeguarding techniques to make shadow alignment difficult, such as adversarial training. 3. Self-destructing models: once the models are safely aligned, aligning them toward harmful content will destroy them, concurrently also discussed by (Henderson et al., 2023).” from Yang et al.
From my knowledge, Henderson et al. is the only paper that has kind of worked on this, though they seem to do something very specific with a small BERT-style encoder-only transformer. They seem to prevent it from being repurposed with some method.
This whole task seems really daunting to me: imagine that you have to prove, for any method, that you can’t go back to certain abilities. If you have a really dangerous model that can self-exfiltrate and self-improve, how do you prove that your {constitutional AI, RLHF} robustly removed this capability?
Thank you; I’ll read the papers you’ve shared. While the task is daunting, it’s not a problem we can afford to avoid. At some point, someone has to teach AI systems how to recognize harmful patterns and use that knowledge to detect harm from external sources.