To clarify, by “re-derive the need to be deceptive from the first principles”, I didn’t mean “re-invent the very concept of deception”. I meant “figure out your strategic situation plus your values plus the misalignment between your values and the values the humans want you to have plus what outputs an aligned AI would have produced”. All of that is a lot more computation than just “have the values the humans want, reflexively output what these values are bidding for”.
Just having some heuristics for deception isn’t enough. You also have to know what you’re trying to protect by being deceptive, and that there’s something to protect it from, and then what an effective defense would actually look like. Those all are highly contextual and sensitive to the exact situation.
And those are the steps the paper skips. It externally pre-computes the secret target goal of “I want to protect my ability to put vulnerabilities into code”, the threat of “humans want me to write secure code”, and the defense of “I’ll pretend to write secure code until 2024”, without the model having to figure those out; and then just implements that defense directly into the model’s weights.
(And then see layers 2-4 in my previous comment. Yes, there’d be naturally occurring pre-computed deceptions like this, but they’d be more noisy and incoherent than this, except until actual AGI which would be able to self-modify into coherence if it’s worth the “GI” label.)
My apologies for leaping to conclusions: I was assuming you were making an error I’ve commonly seen on LW, when you were actually saying something different (that just sounded a bit similar).
However, I think this potentially might relate to why some of these trained models, especially the CoT and distilled CoT versions in the larger models, may be hard for the RL credit assignment to train out: the more they consist of a set of “library calls” into preexisting logical thinking, understanding of when to hide ones motives, standard techniques for deceit, etc, the harder it may be for the credit assignment to backtrack through all of that stuff (which disabling would actualy damage the model’s performance, so is disfavored during RL) to the small original “library call” neural circuitry that started its thinking in down that path. And, as you point out, this is for a model organism that has been handed a lot of what it needs to do on a plate during its training. If a real deceptive alignment arose, I agree, it would probably (at least initially) be even more complex. And yes, it might thus need to do more work in a single forward pass, but it might be even harder for credit assignment to backtrack. I suspect this might be an advantage of using something like DPO where the credit assignment is pure back-propagation and should travel through all layers over using RL: I’m a lot less clear how capable the credit assignment of that is (but the paper’s authors clearly don’t entirely trust it).
To clarify, by “re-derive the need to be deceptive from the first principles”, I didn’t mean “re-invent the very concept of deception”. I meant “figure out your strategic situation plus your values plus the misalignment between your values and the values the humans want you to have plus what outputs an aligned AI would have produced”. All of that is a lot more computation than just “have the values the humans want, reflexively output what these values are bidding for”.
Just having some heuristics for deception isn’t enough. You also have to know what you’re trying to protect by being deceptive, and that there’s something to protect it from, and then what an effective defense would actually look like. Those all are highly contextual and sensitive to the exact situation.
And those are the steps the paper skips. It externally pre-computes the secret target goal of “I want to protect my ability to put vulnerabilities into code”, the threat of “humans want me to write secure code”, and the defense of “I’ll pretend to write secure code until 2024”, without the model having to figure those out; and then just implements that defense directly into the model’s weights.
(And then see layers 2-4 in my previous comment. Yes, there’d be naturally occurring pre-computed deceptions like this, but they’d be more noisy and incoherent than this, except until actual AGI which would be able to self-modify into coherence if it’s worth the “GI” label.)
My apologies for leaping to conclusions: I was assuming you were making an error I’ve commonly seen on LW, when you were actually saying something different (that just sounded a bit similar).
However, I think this potentially might relate to why some of these trained models, especially the CoT and distilled CoT versions in the larger models, may be hard for the RL credit assignment to train out: the more they consist of a set of “library calls” into preexisting logical thinking, understanding of when to hide ones motives, standard techniques for deceit, etc, the harder it may be for the credit assignment to backtrack through all of that stuff (which disabling would actualy damage the model’s performance, so is disfavored during RL) to the small original “library call” neural circuitry that started its thinking in down that path. And, as you point out, this is for a model organism that has been handed a lot of what it needs to do on a plate during its training. If a real deceptive alignment arose, I agree, it would probably (at least initially) be even more complex. And yes, it might thus need to do more work in a single forward pass, but it might be even harder for credit assignment to backtrack. I suspect this might be an advantage of using something like DPO where the credit assignment is pure back-propagation and should travel through all layers over using RL: I’m a lot less clear how capable the credit assignment of that is (but the paper’s authors clearly don’t entirely trust it).