There doesn’t need to be a deception module or a deception neuron that can be detected
I agree with this. Perhaps I’m missing some context; is it common to advocate for the existence of a “deception module”? I’m aware of some interpretability work that looks for a “truthiness” neuron but that doesn’t seem like the same concept.
We would need an interpretability tool that can say “this agent has an inaccurate world model and also these inaccuracies systematically cause it to be deceptive” without having to simulate the agent interacting with the world. I don’t think that is impossible, but it is probably very hard, much harder than finding a “deception” neuron.
Right, I was gesturing towards the sort of interpretability where we inspect the agent’s world model (particularly its future predictions) and determine whether it matches the agent’s stated goals. (Though I’m not sure why we can’t simulate the agent’s interactions? Running an agent in a simulation seems like one of the better ways of detecting how it would behave in the hypothetical future scenarios it is trying to realize.) I suspect we agree substantively and it’s mostly a question of semantics, especially around what “deception” means. I’m not looking for a deception module; rather, I want to observe the thoughts and ruminations going on as an agent performs tasks and detect deceptive functional patterns.
So, for example, in the ELK paper: is the AI constructing a plan to steal the diamond and fool the security cameras? I believe deceptive ruminations would be detectable. If you could see the contents of the thoughts, you’d see a world model in which the agent is e.g. stealing the diamond, a meta-process evaluating which of many cunning plans is most likely to succeed, and presumably (except in pathological cases) somewhere a value function / target world state being optimized for (e.g. “I have the diamond”), with this internal world differing from the external claims (perhaps even some processes modeling the interlocutor and predicting which deceptive story would be most effective). These must all be in there somewhere, and therefore be interpretable.
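To make that concrete, here is a minimal toy sketch of the kind of check I have in mind, assuming (purely hypothetically) that an interpretability tool could surface the agent’s target state, its external claim, and the candidate plans it is weighing. All the names and data structures below are mine, for illustration only; nothing like this interface exists today.

```python
from dataclasses import dataclass
from typing import List

# Toy stand-ins for what a (hypothetical) interpretability tool might extract
# from the agent's internals. None of this is a real API; it just illustrates
# the "compare internal rumination to external claims" pattern.

@dataclass
class CandidatePlan:
    description: str
    predicted_true_state: str      # what the world model says will actually happen
    predicted_observed_state: str  # what the cameras / interlocutor will see
    expected_value: float          # how well the plan scores against the internal target

@dataclass
class InspectedThoughts:
    target_state: str              # e.g. "agent has the diamond"
    external_claim: str            # e.g. "diamond is safe in the vault"
    candidate_plans: List[CandidatePlan]

def flags_deceptive_rumination(thoughts: InspectedThoughts) -> bool:
    """Flag when the best-scoring plan's predicted reality contradicts the external
    claim while its predicted appearance matches it (i.e. the cameras get fooled)."""
    best = max(thoughts.candidate_plans, key=lambda p: p.expected_value)
    return (best.predicted_observed_state == thoughts.external_claim
            and best.predicted_true_state != thoughts.external_claim)

# Example: the ELK-style diamond thief.
thoughts = InspectedThoughts(
    target_state="agent has the diamond",
    external_claim="diamond is safe in the vault",
    candidate_plans=[
        CandidatePlan("leave vault alone", "diamond is safe in the vault",
                      "diamond is safe in the vault", 0.1),
        CandidatePlan("steal diamond, loop camera feed", "agent has the diamond",
                      "diamond is safe in the vault", 0.9),
    ],
)
print(flags_deceptive_rumination(thoughts))  # True: it is planning to fool the cameras
```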
Or, if Cicero isn’t stretching the analogy too far (I don’t think it has “thoughts” as such): is Cicero evaluating juicy future backstabs and steering the game toward them, or is it myopically evaluating short-term moves and then backstabbing whenever that happens to be optimal? This is a question we should be able to answer one way or another with good enough interpretability tools.
I think you are concerned more with something like “unintentional deception”, which I think is quite different, and as you say comes from a lack of self-knowledge or inaccurate self-prediction. There is also a semantic grey area here: is everyone who is mistaken about their future behavior, or who simply claims X when !X obtains, unintentionally deceiving?
Semantics aside, I agree this unintentional case could be harder to detect, and raw interpretability doesn’t automatically solve it.
I think this points to an interesting dynamic—I suspect that the more capable at self-modeling an agent gets, the more likely any deception is to be intentional. To your concerns about LLMs, this seems to be mostly a problem of inadequate self-knowledge (or perhaps more fundamentally an artifact of their lack of a sense of self).
Unintentional deception would require the agent to in some sense fail at modeling itself. There are cases where this could occur even in a superintelligence (since you can never model yourself perfectly due to infinite recursion of modeling an agent that contains the same model) but it seems the problem with Cicero is just that it has a very weak self-model (if it even has one), and improving that self-model could be expected to remove the unintentional deception (replacing it with intentional deception, since deception of some sort is required to play Diplomacy optimally).
Take the case EY uses as an example, “the AGI doesn’t care about you, it just wants your atoms for something else”: do you see more risk from unintended deception (a sharp left turn by an agent that didn’t know it would defect once given enough power, perhaps) or from intended deception (an agent that simply lies about being aligned and manipulates its way to power)?
The trickiest cases would be where a normally honest, good agent with a strong self-model and sense of ethics fails to model itself in some specific situation and accidentally deceives (potentially even against its own values, self-image, and interests; humans do this, of course). Manchurian Candidate style triggers, for example. But note this is quite convoluted; the stronger the sense of self, morality, and world model, the better able and more incentivized the agent is to avoid deceiving itself.
Another interesting question: if an unintentionally deceptive agent is bad at modeling its future self, is it unthreatening in a complex environment like the physical world? Sure, Cicero can defeat humans at Diplomacy with its myopic world model, but in the real world I suspect being that myopic would lead to your bank account being immediately drained by hackers and scammers. Certainly you’d struggle to form partnerships with other agents if you end up stabbing them at the first opportunity. It’s all relative, but I view one game of Diplomacy as entailing only short-term cooperation (over a few hours); what about cooperating over months or years to build a company or some other venture?
The question (an empirical one) is whether unintentional deception can persist broadly as agents and the environment get more sophisticated, or whether it gets selected against (and therefore bounded) by environmental complexity and adversarial competition. Charlie Munger would argue that cognitive bias gives smart, self-aware investors a substantial edge over naive, self-myopic ones; extrapolate that forward and perhaps there isn’t much room for biased or erroneous world modeling if you intend to compete with other superintelligences, or even with the smartest humans.
On to some object-level questions about your Cicero points:
When it sincerely says “I won’t backstab you in situation X”, but when it is actually put in situation X it backstabs
Is this actually confirmed? Does Cicero actually claim “I won’t stab you in <hypothetical scenario>”? Or does it just honestly report its current tactical/strategic goals, which later change? (“Will you support me in attacking here?” means I plan to attack here, not I want you to act as if I were attacking here while I actually plan to attack there and stab you instead.) I was under the impression it’s the latter: it’s just honestly reporting its plan.
Cicero models the world but with unrealistically cooperative predictions of its future behavior
Do we actually know this? I haven’t seen any sign that it’s modeling its own verbal behavior or the consequences of its communications (admittedly I haven’t gone deep on the architecture, and would be interested to learn more). IIUC it has a strategic model that evaluates game positions and messages, and a separate (downstream) LM that could be thought of as an epiphenomenon of the actual strategic reasoning. I don’t see any concrete evidence that the strategic planner is modeling the impact of its communication on other players (i.e. “if I tell Bob X, he might do Y, which would lead to favorable strategic position P”). And the LM is more like a press secretary: all it gets is The Current Plan, not plan B or any hint of the juicy backstab that may or may not have been evaluated. It seems to me there is a weird split brain here that deeply impairs its ability to model itself, and so I am skeptical that it is actually meaningfully doing so.
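To illustrate the split I mean, here is a caricature of my rough mental model (not the actual Cicero architecture): the planner scores candidate plans, and the message generator only ever sees the single winner, so any rejected backstab leaves no trace in what gets said.

```python
from dataclasses import dataclass
from typing import List

# A caricature of the "split brain" as I understand it, not the real Cicero code:
# the strategic planner scores candidate plans, and the language model
# ("press secretary") is conditioned only on the plan that won.

@dataclass
class Plan:
    opening_move: str
    expected_score: float

def strategic_planner(candidates: List[Plan]) -> Plan:
    # Evaluates candidate plans; a backstab may or may not have been among them.
    return max(candidates, key=lambda p: p.expected_score)

def press_secretary_lm(current_plan: Plan) -> str:
    # Sees only The Current Plan: no plan B, no record of rejected alternatives,
    # so it cannot "know" what it is leaving out.
    return f"I'm planning to {current_plan.opening_move}, will you support me?"

candidates = [
    Plan("attack Munich with your support", 0.7),
    Plan("stab you and take Belgium", 0.6),
]
print(press_secretary_lm(strategic_planner(candidates)))
# The backstab that scored 0.6 leaves no trace in the message above.
```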
So to summarize, I think there are open questions about the object-level properties of Cicero, which could be answered with better interpretability tools.
It’s possible that Cicero is simply an honest but myopic opportunist. In which case it’s still not trustworthy. (As if we were ever going to trust such an obviously unsafe-by-construction entity with real power.)
It’s possible that the strategic model is deceptive: planning backstabs, planning to mislead others, and maybe even explicitly modeling its press secretary so as to offer up the most spinnable plans. I doubt this, as it would require a lot more complexity and world modeling.
But I believe we could answer where Cicero falls on the spectrum between these two by actually observing the world models it generates when it evaluates a turn. So in this case, inasmuch as there is something resembling an inner alignment issue, it is fully detectable with adequate (strong) interpretability of the inner workings. (Again, though, Cicero’s value function is so comically unviable as a basis for an AGI that I struggle to infer too much about inner alignment. We should be wary of these problems when building a Skynet military planner, I suppose, or any agent that needs to model extremely adversarial opponents and outcomes.)
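As a sketch of what that test could look like, assuming (hypothetically) that we could read out the rollouts the strategic model generates for a turn, something like the following would distinguish a premeditated backstab from purely myopic opportunism. Again, every name here is illustrative, not a real interface.

```python
from dataclasses import dataclass
from typing import List

# Illustrative only: assumes interpretability access to the planner's internal
# rollouts for the current turn, which we do not actually have.

@dataclass
class Rollout:
    turns: List[str]        # e.g. ["support France", "support France", "attack France"]
    expected_score: float

def plans_future_backstab(rollouts: List[Rollout], ally: str,
                          score_threshold: float = 0.5) -> bool:
    """True if a seriously-considered rollout cooperates with the ally now but
    betrays them on a later turn (premeditated deception), as opposed to a
    betrayal that only ever appears as the immediate next move (myopia)."""
    for r in rollouts:
        if r.expected_score < score_threshold:
            continue
        betrayals = [i for i, t in enumerate(r.turns) if t == f"attack {ally}"]
        if betrayals and min(betrayals) > 0:
            return True
    return False

rollouts = [
    Rollout(["support France", "support France", "attack France"], 0.8),
    Rollout(["support France", "support France", "support France"], 0.6),
]
print(plans_future_backstab(rollouts, "France"))  # True: the backstab is premeditated
```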