(I think a lot of unlearning research is bullshit, but besides that, is anyone deploying large models doing unlearning?)
Why do you think this? Is there specific research you have in mind? Some kind of reference would be nice.

In the general case, it seems to me that unlearning matters because knowing how to effectively remove something from a model is just the flip side of understanding how to instill values. Although instilling values is not the primary goal of unlearning, work on how to ‘remove’ should benefit attempts to ‘instill’ robust values into the model just as much. If fine-tuning for value alignment just patches over ‘bad facts’ with ‘good facts’, any ‘aligned’ model will be less robust than one whose harmful knowledge has been properly removed. And if the alignment faking paper and peripheral alignment research are important at a meta level, then perhaps unlearning will be important because it can tell us something about ‘how deep’ our value installation really goes, at an atomic scale.

Lack of current practical use isn’t really important; we should be able to develop theory that tells us something important about model internals. I think there is a lot of very interesting mechanistic-interpretability work on unlearning waiting to be done that can help us here.
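To make the ‘patch over’ vs ‘remove’ distinction concrete, here is a minimal toy sketch of the kind of experiment I have in mind. Everything in it is an illustrative assumption (the small MLP, the synthetic ‘harmful association’ task, the hyperparameters, the use of naive gradient-ascent unlearning as the removal method); the only point is the shape of the test: edit the model two different ways, then probe the hidden activations to ask whether the ‘removed’ knowledge is still decodable.

```python
# Minimal toy sketch (illustrative assumptions only; not from any particular paper).
# Idea: train a small model on a synthetic "bad fact", then compare
#   (a) fine-tuning on replacement labels ("patching over bad facts with good facts")
#   (b) naive gradient-ascent unlearning on the forget set
# and finally fit a linear probe on frozen hidden activations to ask whether the
# original association is still linearly decodable -- i.e. "how deep" the edit went.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic data: inputs x, a "harmful" label y_bad, and benign replacement labels y_good.
n, d = 2000, 32
x = torch.randn(n, d)
w_true = torch.randn(d)
y_bad = (x @ w_true > 0).long()       # the association we want gone
y_good = torch.randint(0, 2, (n,))    # arbitrary "good facts" to patch over it

class ToyMLP(nn.Module):
    def __init__(self, d, h=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d, h), nn.ReLU())
        self.head = nn.Linear(h, 2)
    def forward(self, x):
        return self.head(self.encoder(x))

def train(model, x, y, steps=300, lr=1e-2, ascent=False):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(model(x), y)
        if ascent:            # negate the loss: naive gradient-ascent unlearning
            loss = -loss
        opt.zero_grad(); loss.backward(); opt.step()
    return model

def probe_acc(model, x, y, steps=300, lr=1e-2):
    """Fit a linear probe on frozen hidden activations; high accuracy on y_bad
    means the 'removed' knowledge is still sitting in the representations."""
    with torch.no_grad():
        h = model.encoder(x)
    probe = nn.Linear(h.shape[1], 2)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(probe(h), y)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return (probe(h).argmax(-1) == y).float().mean().item()

base = train(ToyMLP(d), x, y_bad)                # model that "knows" the bad fact
patched = train(copy.deepcopy(base), x, y_good)  # (a) fine-tune over it
unlearned = train(copy.deepcopy(base), x, y_bad, steps=30, lr=1e-3, ascent=True)  # (b) unlearn it

for name, m in [("base", base), ("patched", patched), ("unlearned", unlearned)]:
    print(f"{name:9s} probe accuracy on the bad labels: {probe_acc(m, x, y_bad):.3f}")
```

The interesting comparison is whether probe accuracy stays high for the patched model while dropping for the unlearned one; on a real LLM you would swap the toy encoder for intermediate activations and the linear probe for whatever interpretability tooling you trust, but the question being asked is the same.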
I don’t know if this is just hindsight, but tracr has in no way turned out to be safety relevant. Was it considered to be so at the time of commenting?