I agree that this is a very important area of research. In fact, I work on this problem myself.
Some points:
I didn’t get from the paper alone what $I$ refers to. Maybe a quick definition in the paper would be nice.
I think it would be good to compare against the Vaccine algorithm from Huang et al. (“Vaccine: Perturbation-aware alignment for large language model”), since they are essentially trying to solve the same problem. I’m not affiliated with that paper, but I did write a private reference implementation as a Hugging Face Trainer. Lmk if you are interested and I can send you the code.
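To give a flavor of the idea (this is not my actual Trainer code, just a toy numpy sketch of the perturbation-aware inner-max/outer-min loop; all names and the linear "model" are illustrative):

```python
import numpy as np

# Toy sketch of Vaccine-style perturbation-aware alignment:
# the inner step perturbs the hidden representation within an eps-ball to
# maximize the alignment loss; the outer step updates the weights on the
# perturbed states. Purely illustrative, not the paper's method verbatim.

rng = np.random.default_rng(0)
W = rng.normal(size=4)            # toy linear "alignment head"
H = rng.normal(size=(8, 4))       # frozen hidden states (batch of 8)
y = np.ones(8)                    # target alignment signal
eps, lr = 0.1, 0.05

def loss_grad_h(W, H, y):
    """Mean-squared alignment loss and its gradient w.r.t. the hidden states."""
    err = H @ W - y
    return 0.5 * np.mean(err ** 2), (err[:, None] * W[None, :]) / len(y)

loss_before, _ = loss_grad_h(W, H, y)
for _ in range(200):
    # inner maximization: one signed-gradient ascent step on the hidden states
    _, g_h = loss_grad_h(W, H, y)
    H_adv = H + eps * np.sign(g_h)
    # outer minimization: gradient step on W using the perturbed states
    err = H_adv @ W - y
    W -= lr * (H_adv.T @ err) / len(y)
loss_after, _ = loss_grad_h(W, H, y)
```

The point is only the two-level structure: the model is trained to keep its alignment loss low even under the worst bounded perturbation of its hidden states.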
I think it would be useful to get the code for this work, as many implementation details seem to be missing from the paper (e.g. on my skim I didn’t find the batch-size which you used for training). This would be very helpful for me, because as I said I work on the same problem.
Thanks for reaching out; this is all great feedback.
We will definitely address that. I will DM you about the Vaccine implementation, since we are currently working on this as well, and to see what would be useful for code sharing; we are a wee bit away from having a shareable replication of the whole paper.
Some answers:
Oh, whoops, it should be clearer that $I$ refers to the mutual information measure. If there is something more specific you are looking for here, let me know, as we do mention it several times (I think!). In case it helps: mutual information is always an abstract measure or property in the paper, used to show that we minimize the Achilles transition probability; that quantity is actually measured indirectly, through MMD or gradient magnitude.
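For concreteness, this is the standard discrete definition we have in mind (a toy sketch only; in the paper $I$ stays abstract and is never computed directly):

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in nats from a joint probability table p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = joint > 0                        # skip zero cells (0 log 0 = 0)
    return float(np.sum(joint[mask] * np.log(joint[mask] / (px @ py)[mask])))

# perfectly dependent binary variables: I = log 2 ≈ 0.693 nats
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))
# independent variables: I = 0
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))
```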
Yes, as mentioned we are actively working on it, and your implementation would surely be valuable. Security vectors was simply what was ready by the NeurIPS deadline, is all, lol.
Ah yes! Thanks for pointing this out; there is a lot to say about batch size when using MMD. The batch size was always 4 (which for paired refusals is effectively 8, I suppose!). We will make sure this is not missing from the paper; sorry about that.
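For reference, here is a minimal numpy sketch of the kind of biased MMD^2 estimator we mean (illustrative only, not the paper's code). It also shows why small batches matter: with a batch of 4 per side, each kernel-mean term averages over only 16 evaluations, so the estimate is noisy.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased MMD^2 estimate between samples X and Y with an RBF kernel."""
    def k(A, B):
        # pairwise squared distances, then Gaussian kernel
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(4, 8))   # batch of 4 samples, 8-dim features
Y = rng.normal(3.0, 1.0, size=(4, 8))   # clearly shifted distribution

print(rbf_mmd2(X, X))  # identical samples: exactly 0
print(rbf_mmd2(X, Y))  # shifted distribution: clearly positive
```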
I’ll follow up privately, but feel free to respond here as well for additional clarification. Your comment is much appreciated.