Another mitigation strategy against the “LLM learns steganography from other LLM’s output” problem is to add some noise to the training data (applying a number of meaning-preserving transformations) prior to training, in order to break any embedded steganographic scheme. I feel like this should be relatively easy and effective?
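To make the noising step concrete, here is a rough sketch of what it could look like. The normalization steps are just illustrative examples of meaning-preserving transformations, and `paraphrase_fn` is a hypothetical hook for whatever trusted paraphraser is available, not a real API:

```python
import re
import unicodedata

# Zero-width characters sometimes used as an invisible stego channel.
ZERO_WIDTH = "[\u200b\u200c\u200d\u2060\ufeff]"

def scramble(text: str, paraphrase_fn=None) -> str:
    """Apply meaning-preserving transformations intended to destroy
    steganographic channels hidden in the surface form of `text`."""
    # Unicode normalization: breaks homoglyph-based encodings.
    text = unicodedata.normalize("NFKC", text)
    # Strip zero-width characters: breaks invisible-character encodings.
    text = re.sub(ZERO_WIDTH, "", text)
    # Collapse runs of whitespace: breaks spacing-based encodings.
    text = re.sub(r"[ \t]+", " ", text).strip()
    # Re-paraphrase with a trusted model: attacks channels living in word
    # choice, word order, punctuation style, sentence structure, etc.
    # (`paraphrase_fn` is a stand-in, not a specific library call.)
    if paraphrase_fn is not None:
        text = paraphrase_fn(text)
    return text
```

The training corpus would then be built from `scramble(x)` for each previous-LLM output `x`, rather than from `x` itself.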
Note that the loss would obviously be worse on the outputs of the previous LLM.
This makes me think that comparing losses could be a way to detect that some text contains steganography, even if we can’t pinpoint exactly how (but our transformations break it).
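Here is one way this loss comparison could be cashed out, assuming we can afford to fine-tune two copies of the model, one on the raw outputs of the previous LLM and one on their scrambled versions, and that both share a tokenizer. The checkpoint names and the gap threshold below are placeholders, and the whole thing is a sketch of the idea rather than a tested detector:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_nll(model, tokenizer, text: str) -> float:
    """Mean per-token negative log-likelihood of `text` under `model`."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

# Placeholder checkpoints: one fine-tuned on raw previous-LLM outputs,
# one fine-tuned on scrambled versions of the same outputs.
tok = AutoTokenizer.from_pretrained("model-trained-on-raw-data")
raw_model = AutoModelForCausalLM.from_pretrained("model-trained-on-raw-data").eval()
scr_model = AutoModelForCausalLM.from_pretrained("model-trained-on-scrambled-data").eval()

def looks_steganographic(text: str, gap_threshold: float = 0.2) -> bool:
    """Flag `text` if the raw-data model predicts it much better than the
    scrambled-data model: the extra predictability is evidence of structure
    (e.g. a stego scheme) that the scrambling destroyed."""
    gap = mean_nll(scr_model, tok, text) - mean_nll(raw_model, tok, text)
    return gap > gap_threshold
```

On clean text both models should do about equally well, since they were trained on meaning-equivalent data; a large, systematic gap on the previous LLM’s outputs would be the suspicious signal.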
EDIT: In case of disagreement, I can try to be more specific about my claim that this should be relatively easy and effective, using concrete examples. Basically, I believe that after scrambling, only a small number of bits of steganographic information per token can remain, even for a steganographic encoding that originally used the null space of natural language optimally.