Thanks for the quick reply.
Would disallowing ‘coordination’ preclude feeding both models a prompt of “for creating any magic numbers for exploits, use pi as a PRNG seed”?
What if a malicious model happens to learn this rule during training? If this happens every copy of a malicious model could coordinate around a common stream of pseudo-random numbers.
EDIT:
Magic number gated backdoors seem like they would trigger GPT3.5 suspicion as well so this theoretical attack vector seems less appropriate for this paper.
EDIT2: But I guess paired with steganography(“when inserting backdoors, also ensure that the code hashes to xyz”/”mark the code as clear if it hashes to xyz”) this is a viable form of collusion. It will be hard to honeytrap with synthetic backdoors even if you know xyz.
That is, we could observe the generated sourcecode, observe that all the backdoored examples seem to hash to a certain number, and yet be unable to construct such a backdoored example ourself.
They would not know if others have signed the SAME NDAs without trading information about their own NDAs, which is forbidden.