Thanks for sharing these interesting results!
I am a big fan of reporting unlearning results across identified forget set fractions! That said, I think the unlearning results lack comparisons to important ablations/baselines that would really test whether gradient routing is adding value. For example:
1. CF (catastrophic forgetting) - This would involve removing most components of ERA, keeping only the finetuning on the retain set.
2. Ascent + CF - This would involve a light touch of gradient ascent (maximizing the loss) on the forget set, with simultaneous finetuning on the retain set. See [1] or AC↯DC in [2] for good implementations (a rough sketch of (1) and (2) is given below).
3. Methods that combine these concepts specifically for LLMs, like LLMU [3].
Without these, it is difficult to know if gradient routing is actually adding any value on top of what can be achieved with traditional finetuning.
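To make (1) and (2) concrete, here is a rough, illustrative sketch of the kind of baseline I have in mind (not any particular paper's implementation; `model`, `retain_loader`, and `forget_loader` are hypothetical placeholders, the loss assumes a HuggingFace-style causal LM whose forward pass returns `.logits`, and the learning rate and ascent coefficient would need tuning):

```python
import itertools

import torch
import torch.nn.functional as F


def lm_loss(model, batch, device):
    # Placeholder next-token cross-entropy; assumes a HuggingFace-style
    # causal LM whose output object exposes `.logits`.
    input_ids = batch["input_ids"].to(device)
    logits = model(input_ids).logits
    return F.cross_entropy(logits[:, :-1].reshape(-1, logits.size(-1)),
                           input_ids[:, 1:].reshape(-1))


def ascent_plus_cf(model, retain_loader, forget_loader, steps=1000,
                   lr=1e-5, ascent_coef=0.1, device="cuda"):
    # Baseline (2): light gradient ascent on the forget set with simultaneous
    # finetuning (descent) on the retain set. Setting ascent_coef=0 recovers
    # baseline (1), plain catastrophic forgetting via retain-only finetuning.
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    retain_iter = itertools.cycle(retain_loader)
    forget_iter = itertools.cycle(forget_loader)
    model.train()
    for _ in range(steps):
        loss = (lm_loss(model, next(retain_iter), device)
                - ascent_coef * lm_loss(model, next(forget_iter), device))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```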
Also, the SSD method has been shown to perform well on the setup of partial deletion sets [4], so another thing to check would be comparing Potion (a follow-up to SSD) [5] plus finetuning on the retain set, which would stress-test the hypothesis of "we need gradient routing through a new subnetwork instead of just finding the relevant parts of the existing network".
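And a very rough sketch of the SSD/Potion-style step I'm suggesting, assuming per-parameter squared-gradient (Fisher-like) importances have already been computed separately on the forget set and on the retain (or full training) set; the actual selection and dampening rules in [4]/[5] differ in their details:

```python
import torch


@torch.no_grad()
def ssd_style_dampen(model, forget_importance, retain_importance,
                     alpha=10.0, lam=1.0):
    # Approximate SSD-style dampening (see [4], [5] for the real rules).
    # `forget_importance` and `retain_importance` are assumed to be dicts
    # mapping parameter names to per-parameter importance estimates.
    for name, p in model.named_parameters():
        i_f = forget_importance[name]
        i_r = retain_importance[name]
        selected = i_f > alpha * i_r  # disproportionately forget-relevant
        dampen = torch.clamp(lam * i_r / (i_f + 1e-12), max=1.0)
        p[selected] *= dampen[selected]
    # A retain-set finetuning pass (as suggested above) would follow,
    # before evaluating forget/retain performance.
```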
[1] Trippa, Daniel, et al. "$\nabla\tau$: Gradient-Based and Task-Agnostic Machine Unlearning." CoRR 2024.
[2] Kolipaka, Varshita, et al. "A Cognac Shot to Forget Bad Memories: Corrective Unlearning in GNNs." arXiv preprint arXiv:2412.00789 (2024).
[3] Yao, Yuanshun, Xiaojun Xu, and Yang Liu. "Large Language Model Unlearning." arXiv preprint arXiv:2310.10683 (2023).
[4] Goel, Shashwat, et al. "Corrective Machine Unlearning." TMLR 2024.
[5] Schoepf, Stefan, Jack Foster, and Alexandra Brintrup. "Potion: Towards Poison Unlearning." DMLR 2024.
Thanks for the feedback and references!
On catastrophic forgetting: our appendix includes a “control” version of ERA that doesn’t use gradient routing but is otherwise the same (appendix C, figure 12). This shows that the effect of retain-set fine-tuning is negligible in the absence of gradient routing.
On gradient ascent or similar methods: there are many unlearning methods that don’t target or achieve the kind of robust localization and removal that we care about, as mentioned in our discussion of related works, and, e.g., in this post. We included RMU as a stand-in for this class, and I personally don’t see much value in doing more extensive comparisons there.
On Corrective Unlearning: we weren’t aware of other unlearning approaches that consider imperfect labeling, so this is a very helpful reference—thanks! It would be interesting to compare ERA-type methods to these. My concern with fine-tuning methods is that they might not be suitable for robustly removing broader capabilities (like "virology") as opposed to correcting for small perturbations to the training data.
Thanks for pointing me to Figure 12; it alleviates my concern! I don’t fully agree with RMU being a stand-in for ascent-based methods: targeted representation noising (as done in RMU) seems easier to reverse than loss-maximization methods (like TAR). Finally, I just wanted to clarify that I see SSD/Potion more as automated mechanistic interpretability methods rather than finetuning-based ones. What I meant to say was that adding some retain-set finetuning on top (as done for gradient routing) might be needed to make them work for tasks like unlearning virology.
Ah, I see what you mean. I think my use of the term “fine-tuning” was misleading. The distinction I’m trying to draw is between interventions applied throughout training vs. after training. “Post hoc” would have been a better term to describe the latter.
My suspicion is that post hoc methods will not be sufficient to robustly remove capabilities that are strongly reinforced by the training objective (while maintaining good general performance), because the capabilities are “too deeply ingrained.”[1] We’re excited about gradient routing’s potential to solve this problem by separating capabilities during training. However, I agree that there isn’t enough evidence yet, and it would be great to do more extensive comparisons, particularly to these recent methods which also target good performance under imperfect labeling.
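To illustrate the kind of during-training separation I mean, here is a heavily simplified, illustrative sketch of detach-based gradient routing on a single MLP layer. The paper's actual ERA setup (expansion, routing configuration, ablation, and retain fine-tuning) differs from this toy, and the dimension split here is arbitrary:

```python
import torch
import torch.nn as nn


class RoutedMLP(nn.Module):
    # Toy MLP layer with a designated block of hidden units for the forget
    # set. The detach trick in forward() controls which hidden units backward
    # gradients may flow through on a given batch; after training, the
    # designated block can be ablated. A sketch of the routing idea only.

    def __init__(self, d_model=256, d_hidden=1024, n_routed=128):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)
        mask = torch.zeros(d_hidden)
        mask[-n_routed:] = 1.0  # last n_routed units form the routed block
        self.register_buffer("routed_mask", mask)

    def forward(self, x, batch_is_forget: bool):
        h = torch.relu(self.up(x))
        # Forget batches: gradients flow only into the routed block.
        # Retain batches: gradients flow only into the remaining units.
        m = self.routed_mask if batch_is_forget else 1.0 - self.routed_mask
        # Detach-based routing: forward values are unchanged, but gradients
        # flowing back through h pass only through the units selected by m.
        # (Note: self.down still receives gradients for all columns here;
        # this sketch only routes gradients upstream of h.)
        h = m * h + (1.0 - m) * h.detach()
        return self.down(h)

    def ablate_routed_block(self):
        # Post-training "ablate" step: zero the routed units' contribution.
        with torch.no_grad():
            self.down.weight[:, self.routed_mask.bool()] = 0.0
```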
For what it’s worth, I don’t think fine-tuning is doing that much work for us: we see it as a light-touch correction to “internal distribution shift” caused by ablation. As mentioned in this comment, we find that post-ablation fine-tuning on retain helps both retain and forget set performance. In the same comment we also show that retraining on the training distribution (a mixture of forget and retain) produces qualitatively similar results.
Also, if the goal is to be robust not only to imperfect labeling but also to forget set retraining, then there is a fundamental challenge for post hoc methods: the minimal changes to a model that induce bad performance on a task are potentially quite different from the minimal changes that prevent retrainability.
That makes sense. My higher-level concern with gradient routing (to some extent true for any other safety method) being used throughout training rather than after training is the alignment tax: it might lead to significantly lower performance and therefore not get adopted in frontier models.
Evidence of this for gradient routing: people have tried various forms of modular training before [1], [2], and they never really caught on because it’s always better to train a combined model that allows optimal sharing of parameters.
It’s still a cool idea though, and I would be happy to see it work out :)
[1] Andreas, Jacob, et al. "Neural Module Networks." CVPR 2016.
[2] Ebrahimi, Sayna, et al. "Adversarial Continual Learning." ECCV 2020.