Much research on deception (Anthropic’s recent work, trojans, jailbreaks, etc) is not targeting “real” instrumentally convergent deception reasoning, but learned heuristics. Not bad in itself, but IMO this places heavy asterisks on the results they can get.
I talked about this with Garrett; I’m unpacking the above comment and summarizing our discussions here.
Sleeper Agents is very much in the “learned heuristics” category, given that we are explicitly training the behavior in the model. Corollary: the underlying mechanisms for sleeper-agents-behavior and instrumentally convergent deception are presumably wildly different(!), so it’s not obvious how valid inference one can make from the results
Consider framing Sleeper Agents as training a trojan instead of as an example of deception. See also Dan Hendycks’ comment.
Much of existing work on deception suffers from “you told the model to be deceptive, and now it deceives, of course that happens”
There is very little work on actual instrumentally convergent deception(!) - a lot of work falls into the “learned heuristics” category or the failure in the previous bullet point
People are prone to conflate between “shallow, trained deception” (e.g. sycophancy: “you rewarded the model for leaning into the user’s political biases, of course it will start leaning into users’ political biases”) and instrumentally convergent deception
(For more on this, see also my writings here and here. My writings fail to discuss the most shallow versions of deception, however.)
Also, we talked a bit about
The field of ML is a bad field to take epistemic lessons from.
and I interpreted Garrett saying that people often consider too few and shallow hypotheses for their observations, and are loose with verifying whether their hypotheses are correct.
Example 1: I think the Uncovering Deceptive Tendencies paper has some of this failure mode. E.g. in experiment A we considered four hypotheses to explain our observations, and these hypotheses are quite shallow/broad (e.g. “deception” includes both very shallow deception and instrumentally convergent deception).
Example 2: People generally seem to have an opinion of “chain-of-thought allows the model to do multiple steps of reasoning”. Garrett seemed to have a quite different perspective, something like “chain-of-thought is much more about clarifying the situation, collecting one’s thoughts and getting the right persona activated, not about doing useful serial computational steps”. Cases like “perform long division” are the exception, not the rule. But people seem to be quite hand-wavy about this, and don’t e.g. do causal interventions to check that the CoT actually matters for the result. (Indeed, often interventions don’t affect the final result.)
Finally, a general note: I think many people, especially experts, would agree with these points when explicitly stated. In that sense they are not “controversial”. I think people still make mistakes related to these points: it’s easy to not pay attention to the shortcomings of current work on deception, forget that there is actually little work on real instrumentally convergent deception, conflate between deception and deceptive alignment, read too much into models’ chain-of-thoughts, etc. I’ve certainly fallen into similar traps in the past (and likely will in the future, unfortunately).
I feel like much of this is the type of tacit knowledge that people just pick up as they go, but this process is imperfect and not helpful for newcomers. I’m not sure what could be done, though, beside the obvious “more people writing their tacit knowledge down is good”.
Example 2: People generally seem to have an opinion of “chain-of-thought allows the model to do multiple steps of reasoning”. Garrett seemed to have a quite different perspective, something like “chain-of-thought is much more about clarifying the situation, collecting one’s thoughts and getting the right persona activated, not about doing useful serial computational steps”. Cases like “perform long division” are the exception, not the rule. But people seem to be quite hand-wavy about this, and don’t e.g. do causal interventions to check that the CoT actually matters for the result. (Indeed, often interventions don’t affect the final result.)
I will clarify on this. I think people often do causal interventions in their CoTs, but not in ways that are very convincing to me.
I talked about this with Garrett; I’m unpacking the above comment and summarizing our discussions here.
Sleeper Agents is very much in the “learned heuristics” category, given that we are explicitly training the behavior in the model. Corollary: the underlying mechanisms for sleeper-agents-behavior and instrumentally convergent deception are presumably wildly different(!), so it’s not obvious how valid inference one can make from the results
Consider framing Sleeper Agents as training a trojan instead of as an example of deception. See also Dan Hendycks’ comment.
Much of existing work on deception suffers from “you told the model to be deceptive, and now it deceives, of course that happens”
(Garrett thought that the Uncovering Deceptive Tendencies paper has much less of this issue, so yay)
There is very little work on actual instrumentally convergent deception(!) - a lot of work falls into the “learned heuristics” category or the failure in the previous bullet point
People are prone to conflate between “shallow, trained deception” (e.g. sycophancy: “you rewarded the model for leaning into the user’s political biases, of course it will start leaning into users’ political biases”) and instrumentally convergent deception
(For more on this, see also my writings here and here. My writings fail to discuss the most shallow versions of deception, however.)
Also, we talked a bit about
and I interpreted Garrett saying that people often consider too few and shallow hypotheses for their observations, and are loose with verifying whether their hypotheses are correct.
Example 1: I think the Uncovering Deceptive Tendencies paper has some of this failure mode. E.g. in experiment A we considered four hypotheses to explain our observations, and these hypotheses are quite shallow/broad (e.g. “deception” includes both very shallow deception and instrumentally convergent deception).
Example 2: People generally seem to have an opinion of “chain-of-thought allows the model to do multiple steps of reasoning”. Garrett seemed to have a quite different perspective, something like “chain-of-thought is much more about clarifying the situation, collecting one’s thoughts and getting the right persona activated, not about doing useful serial computational steps”. Cases like “perform long division” are the exception, not the rule. But people seem to be quite hand-wavy about this, and don’t e.g. do causal interventions to check that the CoT actually matters for the result. (Indeed, often interventions don’t affect the final result.)
Finally, a general note: I think many people, especially experts, would agree with these points when explicitly stated. In that sense they are not “controversial”. I think people still make mistakes related to these points: it’s easy to not pay attention to the shortcomings of current work on deception, forget that there is actually little work on real instrumentally convergent deception, conflate between deception and deceptive alignment, read too much into models’ chain-of-thoughts, etc. I’ve certainly fallen into similar traps in the past (and likely will in the future, unfortunately).
I feel like much of this is the type of tacit knowledge that people just pick up as they go, but this process is imperfect and not helpful for newcomers. I’m not sure what could be done, though, beside the obvious “more people writing their tacit knowledge down is good”.
I will clarify on this. I think people often do causal interventions in their CoTs, but not in ways that are very convincing to me.