I agree that if the model constructing the backdoor is allowed to coordinate on secrets with the model that inserted the backdoor, you can do this sort of trickery (by coordinating on a secret for this sort of scheme).
In practice, we didn’t model this sort of case as it didn’t seem like a very important dynamic (for the reason you describe around suspicion).
If we don’t allow any coordination on a shared secret and the model constructing the backdoor and H are identical (as in our paper), then these should operate identically (up to non-determinism).
And we simplify to just fixing the behavior to be exactly identical. (Effectively assuming the exact same prompts and the exact same non-determinism.)
Would disallowing ‘coordination’ preclude feeding both models a prompt of “for creating any magic numbers for exploits, use pi as a PRNG seed”?
What if a malicious model happens to learn this rule during training? If this happens every copy of a malicious model could coordinate around a common stream of pseudo-random numbers.
EDIT: Magic number gated backdoors seem like they would trigger GPT3.5 suspicion as well so this theoretical attack vector seems less appropriate for this paper.
EDIT2: But I guess paired with steganography(“when inserting backdoors, also ensure that the code hashes to xyz”/”mark the code as clear if it hashes to xyz”) this is a viable form of collusion. It will be hard to honeytrap with synthetic backdoors even if you know xyz.
That is, we could observe the generated sourcecode, observe that all the backdoored examples seem to hash to a certain number, and yet be unable to construct such a backdoored example ourself.
I agree that if the model constructing the backdoor is allowed to coordinate on secrets with the model that inserted the backdoor, you can do this sort of trickery (by coordinating on a secret for this sort of scheme).
In practice, we didn’t model this sort of case as it didn’t seem like a very important dynamic (for the reason you describe around suspicion).
If we don’t allow any coordination on a shared secret and the model constructing the backdoor and H are identical (as in our paper), then these should operate identically (up to non-determinism).
And we simplify to just fixing the behavior to be exactly identical. (Effectively assuming the exact same prompts and the exact same non-determinism.)
Thanks for the objection.
Thanks for the quick reply.
Would disallowing ‘coordination’ preclude feeding both models a prompt of “for creating any magic numbers for exploits, use pi as a PRNG seed”?
What if a malicious model happens to learn this rule during training? If this happens every copy of a malicious model could coordinate around a common stream of pseudo-random numbers.
EDIT:
Magic number gated backdoors seem like they would trigger GPT3.5 suspicion as well so this theoretical attack vector seems less appropriate for this paper.
EDIT2: But I guess paired with steganography(“when inserting backdoors, also ensure that the code hashes to xyz”/”mark the code as clear if it hashes to xyz”) this is a viable form of collusion. It will be hard to honeytrap with synthetic backdoors even if you know xyz.
That is, we could observe the generated sourcecode, observe that all the backdoored examples seem to hash to a certain number, and yet be unable to construct such a backdoored example ourself.