I think self-distillation is better than network compression, as it comes with some decently strong theoretical guarantees that you're reducing the complexity of the learned function. I haven't really seen the same for the latter.
But what research do you think would be valuable, other than the obvious (self-distill a deceptive, power-hungry model to see if the negative qualities go away)?
One idea that comes to mind is to see whether a chatbot that is vulnerable to DAN-type prompts could be made robust to them by self-distilling it on non-DAN-type prompts.
I’d also really like to see if self-distillation or similar could be used to more effectively scrub away undetectable trojans. https://arxiv.org/abs/2204.06974
I don't really think the first idea (DAN robustness) would pan out: following DAN-style prompts is the minimum-complexity solution, since the simplest thing for the model to do is act in accordance with whatever prompt it's given.
Backdoors don't emerge naturally. So if it's computationally infeasible to find an input where the original model and the backdoored model differ, the distillation data will essentially never hit the trigger, and self-distillation on the backdoored model is going to give you the same result as self-distillation on the original model.
The only scenario where I think self-distillation is useful would be something like: 1) you train an LLM on a dataset, 2) fine-tune it to be deceptive/power-seeking, and 3) self-distill it on the original dataset. The self-distilled model would then likely no longer be deceptive/power-seeking.
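Concretely, the loop I'm picturing for step 3 is just vanilla knowledge distillation, where the teacher is the fine-tuned model and the student is a fresh copy of the same architecture trained on the original data. A rough sketch below, assuming a Hugging Face-style causal LM; the model paths, temperature, and training details are placeholders, not a real experiment setup:

```python
# Minimal sketch of step 3 (self-distillation on the original dataset).
# Paths, temperature, and the loop itself are placeholders.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_path = "path/to/finetuned-model"  # hypothetical: the possibly-deceptive fine-tuned model
student_path = "path/to/base-model"       # hypothetical: same architecture, fresh/base weights

tokenizer = AutoTokenizer.from_pretrained(teacher_path)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

teacher = AutoModelForCausalLM.from_pretrained(teacher_path).eval()
student = AutoModelForCausalLM.from_pretrained(student_path)

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)
temperature = 2.0

def distill_step(batch_texts):
    """One step of matching the student's next-token distribution to the
    teacher's, on text drawn from the *original* training distribution."""
    inputs = tokenizer(batch_texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        teacher_logits = teacher(**inputs).logits
    student_logits = student(**inputs).logits

    # Soft-label KL divergence between teacher and student token distributions.
    # (A real run would mask padding positions and iterate over the whole dataset.)
    t = F.log_softmax(teacher_logits / temperature, dim=-1)
    s = F.log_softmax(student_logits / temperature, dim=-1)
    loss = F.kl_div(s, t, log_target=True, reduction="batchmean") * temperature**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

You'd then evaluate whether the student still exhibits the deceptive/power-seeking behavior that the fine-tuning introduced.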