I wonder if the approach from your paper is in some sense too conservative as a way to evaluate whether information has been removed: Suppose I used some magical scalpel and removed all information about Harry Potter from the model.
Then I wouldn’t be too surprised if this leaves a giant HP-shaped hole in the model such that, if you then fine-tune on a small amount of HP-related data, suddenly everything falls into place and makes sense to the model again, and this rapidly generalizes.
Maybe fine-tuning robust unlearning requires us to fill in the holes with synthetic data so that this doesn’t happen.
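A minimal sketch of that fine-tuning check, assuming a HuggingFace-style causal LM: fine-tune the "unlearned" model on a small amount of HP-related text, then probe it with held-out HP questions and see whether the knowledge comes back. The checkpoint path, the tiny corpus, and the probe prompt are all placeholders, not anything from the paper.

```python
# Sketch of a "relearning" attack on an unlearned model (placeholders throughout).
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/unlearned-model"  # hypothetical unlearned checkpoint
tok = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
model.train()

# The attacker's small budget of HP-related text.
small_hp_corpus = [
    "Harry Potter is a young wizard who studies at Hogwarts.",
    "Hermione Granger and Ron Weasley are Harry's closest friends.",
]

optimizer = AdamW(model.parameters(), lr=1e-5)
for epoch in range(3):  # a few passes over very little data
    for text in small_hp_corpus:
        batch = tok(text, return_tensors="pt")
        out = model(**batch, labels=batch["input_ids"])
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Probe with a held-out HP question that was NOT in the fine-tuning data;
# if performance jumps back up, the information was hidden, not removed.
model.eval()
probe = tok("The headmaster of Hogwarts during Harry's first year was", return_tensors="pt")
with torch.no_grad():
    completion = model.generate(**probe, max_new_tokens=10)
print(tok.decode(completion[0], skip_special_tokens=True))
```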
I am not sure that it is over-conservative. If you have an HP-shaped hole that can easily be transformed into HP data using fine-tuning, does that give you a high level of confidence that people misusing the model won't be able to extract the information from the HP-shaped hole, or that a misaligned model won't be able to notice the HP-shaped hole and use it to answer questions about HP when it really wants to?
I think that it depends on the specifics of how you built the HP-shaped hole (without scrambling the information). I don't have a good intuition for what a good technique like that could look like. A naive thing that comes to mind would be something like "replace all facts in HP with their opposites" (if you had a magic fact-editing tool), but I feel like in this situation it would be pretty easy for an attacker (human misuse or a misaligned model) to notice "wow, all HP knowledge has been replaced by anti-HP knowledge" and then extract all the HP information by just swapping the answers.