I’m confused about why we would expect the misaligned behavior to be displayed in the Shoggoth rather than the Face. For example, let’s take the setting from the AI control paper, where a misaligned model is trying to insert a backdoor in a coding question, despite it passing a series of unit tests. This could be incentivized in the training loop if e.g. the reward supervision is weak and the backdoor grants the model access to additional resources/compute. I can see a few strategies succeeding:
The Shoggoth speculates on different ways to insert a backdoor, and the Face writes a solution using one of these backdoors.
The Shoggoth outputs the correct solution to the coding question, and the Face modifies it to insert a backdoor.
Some combination of 1) and 2), for instance the Shoggoth writes several versions of the function, one of which contains a backdoor, and the Face returns the version with the backdoor.
All of these result in the Face outputting potentially same backdoored function, and should be equally rewarded by the reward model. Why should we expect that 2) is less likely than 1) or 3)?
In my view, the misaligned behavior is split across the shoggoth and face in most cases. In the case which Daniel K anticipates, the Shoggoth is misaligned in the sense of explicitly (but honestly) plotting to deceive, while the face is misaligned in the sense of dishonestly concealing.
So I am interpreting you as asking why we might expect that specific division of responsibilities, rather than some other divisions such as those you mention.
One reason is that we already see this empirically in the o1 report.
However, to me, the distinction between your case 1 and 2 is not so important. Both scenarios you propose preserve the important thing: there is a mismatch between an honest CoT and a dishonest final answer, which would allow the deceptive final answer to be detected.
There could be other cases where this doesn’t happen; this method is not a guaranteed lie detector. To me, the point is that it will at least sometimes happen. A lie detector that only sometimes works is better than no lie detector at all.
If you give a kid a diary and try to avoid giving them any reason to think you’re reading it, then the kid might write honestly in the diary and reveal some deceptions you wouldn’t previously have known about. The kid might also do any number of other things. However, setting aside the ethics of snooping on your kid and lying to them about it, it’s better than nothing.
One reason is that the Shoggoth will be able to use lots of CoT to think, whereas the Face never gets to do that—it just has to see what the Shoggoth did and then write an output. so the Face should be bad at reasoning/thinking/System2. This doesn’t by itself prevent the Face from being able to modify the correct solution to insert a backdoor, but it helps—modifying a correct solution to insert a backdoor is sometimes trivial but often difficult, difficult in the ‘you have to think about it for a bit’ sense, and in those cases the Face just won’t be able to do it.
Another reason is that the shoggoth and face will often need to coordinate to do the misaligned thing. E.g. even if the face is adept at inserting vulnerabilities, some proposed correct solutions will be easier to modify to insert vulnerabilities than others, and so both shoggoth and face will get higher scores if the shoggoth steers towards the solutions that are easier to modify to insert vulnerabilities. Moreover, by default it’s easier for the face if the shoggoth is transparent about this—if it says e.g. ‘OK here’s a draft solution but this seems like it might be difficult to modify to insert vulnerabilities, hold on let me think for a bit, … ok try this one instead, it has easily-modifiable-feature-X’ because that helps the Face do its job.
...I probably have other reasons which are just intuitions and hard to articulate right now, but thanks for the nudge, I’ll try to think about it more.
One thing that could happen is that the Shoggoth learns to do 100% and the Face just copy-pastes. Like, the Shoggoth writes a CoT that includes a ‘final answer’ and the Face becomes the extremely dumb function ‘take what’s labelled as final answer and regurgitate it.’ However, I don’t think this will happen, because it seems computationally inefficient. Seems like the more efficient thing would be for the Shoggoth to do all the reasoning up to the final answer and then stop, and then the Face to read all of that and synthesize the final answer.
I’m confused about why we would expect the misaligned behavior to be displayed in the Shoggoth rather than the Face. For example, let’s take the setting from the AI control paper, where a misaligned model is trying to insert a backdoor in a coding question, despite it passing a series of unit tests. This could be incentivized in the training loop if e.g. the reward supervision is weak and the backdoor grants the model access to additional resources/compute. I can see a few strategies succeeding:
The Shoggoth speculates on different ways to insert a backdoor, and the Face writes a solution using one of these backdoors.
The Shoggoth outputs the correct solution to the coding question, and the Face modifies it to insert a backdoor.
Some combination of 1) and 2), for instance the Shoggoth writes several versions of the function, one of which contains a backdoor, and the Face returns the version with the backdoor.
All of these result in the Face outputting potentially same backdoored function, and should be equally rewarded by the reward model. Why should we expect that 2) is less likely than 1) or 3)?
In my view, the misaligned behavior is split across the shoggoth and face in most cases. In the case which Daniel K anticipates, the Shoggoth is misaligned in the sense of explicitly (but honestly) plotting to deceive, while the face is misaligned in the sense of dishonestly concealing.
So I am interpreting you as asking why we might expect that specific division of responsibilities, rather than some other divisions such as those you mention.
One reason is that we already see this empirically in the o1 report.
However, to me, the distinction between your case 1 and 2 is not so important. Both scenarios you propose preserve the important thing: there is a mismatch between an honest CoT and a dishonest final answer, which would allow the deceptive final answer to be detected.
There could be other cases where this doesn’t happen; this method is not a guaranteed lie detector. To me, the point is that it will at least sometimes happen. A lie detector that only sometimes works is better than no lie detector at all.
If you give a kid a diary and try to avoid giving them any reason to think you’re reading it, then the kid might write honestly in the diary and reveal some deceptions you wouldn’t previously have known about. The kid might also do any number of other things. However, setting aside the ethics of snooping on your kid and lying to them about it, it’s better than nothing.
One reason is that the Shoggoth will be able to use lots of CoT to think, whereas the Face never gets to do that—it just has to see what the Shoggoth did and then write an output. so the Face should be bad at reasoning/thinking/System2. This doesn’t by itself prevent the Face from being able to modify the correct solution to insert a backdoor, but it helps—modifying a correct solution to insert a backdoor is sometimes trivial but often difficult, difficult in the ‘you have to think about it for a bit’ sense, and in those cases the Face just won’t be able to do it.
Another reason is that the shoggoth and face will often need to coordinate to do the misaligned thing. E.g. even if the face is adept at inserting vulnerabilities, some proposed correct solutions will be easier to modify to insert vulnerabilities than others, and so both shoggoth and face will get higher scores if the shoggoth steers towards the solutions that are easier to modify to insert vulnerabilities. Moreover, by default it’s easier for the face if the shoggoth is transparent about this—if it says e.g. ‘OK here’s a draft solution but this seems like it might be difficult to modify to insert vulnerabilities, hold on let me think for a bit, … ok try this one instead, it has easily-modifiable-feature-X’ because that helps the Face do its job.
...I probably have other reasons which are just intuitions and hard to articulate right now, but thanks for the nudge, I’ll try to think about it more.
One thing that could happen is that the Shoggoth learns to do 100% and the Face just copy-pastes. Like, the Shoggoth writes a CoT that includes a ‘final answer’ and the Face becomes the extremely dumb function ‘take what’s labelled as final answer and regurgitate it.’ However, I don’t think this will happen, because it seems computationally inefficient. Seems like the more efficient thing would be for the Shoggoth to do all the reasoning up to the final answer and then stop, and then the Face to read all of that and synthesize the final answer.