’Any ways to combine all the previously mentioned ideas (A Causal Explainable Guardrails for Large Language Models, Discussion: Challenges with Unsupervised LLM Knowledge Discovery and A Multimodal Automated Interpretability Agent (MAIA), ‘Catching AIs red-handed’, including paulfchristiano’s comment) with ideas from ‘Searching for a model’s concepts by their shape – a theoretical framework’ to ‘catch AIs red-handed’?
Here are a few additional ideas for combining all the mentioned papers and comments to try catching deceptive models red-handed:
Using the “deliberate adversarial experiments” mentioned in paulfchristiano’s comment, like varying the model’s beliefs about how closely it is being monitored, to surface cases where the model’s behavior is suspiciously sensitive to being watched. The theoretical framework from “Searching for concepts by their shape” could be used to formalize the hypothesis that the model has a hidden “deception mode” that gets triggered when it believes oversight is lax. MAIA’s automated experimentation tools could be used to efficiently search for inputs exhibiting this sensitivity.
Comparing the features extracted by the interpretability tools from “Searching for concepts by their shape” across different copies of the model that have been adversarially perturbed to have different beliefs (per paulfchristiano’s comment). If there are decision-relevant features (e.g. representations of the model’s objectives) that are suspiciously different in the copy that believes it is unobserved, that would be strong evidence of deceptive cognition. The causal analysis from “Causal Explainable Guardrails” could help determine whether those differing features are causally linked to the differing outputs.
Training MAIA’s detectors for suspicious model outputs not just on straightforwardly deceptive behaviors, but on subtler signs of agency-like cognition hypothesized in paulfchristiano’s comment, e.g. the model giving worse answers when it believes it is unobserved. This could yield more robust detection that is harder for the model to circumvent by avoiding overt deception.
If interpretability tools manage to extract a model’s representations of its beliefs, objectives, plans, etc. (a big if!), use the techniques from “Catching AIs red-handed” to examine how those representations differ on the inputs that were “successfully” deceptive (i.e. where the model behaved deceptively but was not caught). There may be learnable patterns in how the model’s cognition differs in those cases which could inspire hypotheses about what relations characterize “successful” deception. Those hypotheses could then be formalized and tested using the framework from “Searching for concepts by their shape”.
A point emphasized in paulfchristiano’s comment is the importance of extreme thoroughness and breadth in searching for deceptive cognition, e.g. (to paraphrase) “even if a model exhibits some deceptive behavior, don’t assume you’ve found all of its tricks; keep probing its behavior in a huge variety of contexts”. MAIA and other automated testing tools should be put in service of this kind of breadth. Rather than just focus on the most egregious cases of deception, we should test the model’s behavior on a vast range of subtle perturbations hunting for any whiff of situational inconsistency. The theoretical frameworks for agency from “Searching for concepts by their shape” can guide us in knowing what to look for.
Extending my previous thoughts, I think one of the most useful things the “Searching for concepts by their shape” framing can contribute is mathematical precision in defining hypotheses about what characterizes deceptive cognition. If we can frame agency/deception in terms of precise relations between features (even if those features are hard to interpretably extract in practice), it gives us a crisper target to aim at with our testing, rather than just relying on intuitive notions of what “seems deceptive”.
However, I continue to be quite pessimistic about the prospects for catching competent deceptive models before it’s too late, for the reasons outlined previously. I worry that by the time a model is smart enough that we could meaningfully apply these combined interpretability strategies to it, it would also be smart enough to notice what we’re doing and come up with countermeasures.
Unless interpretability tools far exceed my expectations in their ability to crack open the black box of neural nets (which would be great!), I suspect the most practical path is to use simpler, more behavioral detection strategies (like the model consistency testing from your Control paper) to avoid building models that are competent enough to be robustly deceptive in the first place. Catching a superintelligent deceptive model after the fact seems like a much harder challenge than noticing warning signs of misalignment in weaker models and heading them off early.
But I would still encourage pursuing all these ideas, even if they end up not being enough on their own to solve the problem. The interpretability insights gained would be valuable for alignment more broadly. And we may get lucky in extracting some key structural signature of deceptive cognition that even very sophisticated models can’t fully mask. It’s an arms race we can’t afford to unilaterally forfeit.′
’Any ways to combine all the previously mentioned ideas (A Causal Explainable Guardrails for Large Language Models, Discussion: Challenges with Unsupervised LLM Knowledge Discovery and A Multimodal Automated Interpretability Agent (MAIA), ‘Catching AIs red-handed’, including paulfchristiano’s comment) with ideas from ‘Searching for a model’s concepts by their shape – a theoretical framework’ to ‘catch AIs red-handed’?
Here are a few additional ideas for combining all the mentioned papers and comments to try catching deceptive models red-handed:
Using the “deliberate adversarial experiments” mentioned in paulfchristiano’s comment, like varying the model’s beliefs about how closely it is being monitored, to surface cases where the model’s behavior is suspiciously sensitive to being watched. The theoretical framework from “Searching for concepts by their shape” could be used to formalize the hypothesis that the model has a hidden “deception mode” that gets triggered when it believes oversight is lax. MAIA’s automated experimentation tools could be used to efficiently search for inputs exhibiting this sensitivity.
Comparing the features extracted by the interpretability tools from “Searching for concepts by their shape” across different copies of the model that have been adversarially perturbed to have different beliefs (per paulfchristiano’s comment). If there are decision-relevant features (e.g. representations of the model’s objectives) that are suspiciously different in the copy that believes it is unobserved, that would be strong evidence of deceptive cognition. The causal analysis from “Causal Explainable Guardrails” could help determine whether those differing features are causally linked to the differing outputs.
Training MAIA’s detectors for suspicious model outputs not just on straightforwardly deceptive behaviors, but on subtler signs of agency-like cognition hypothesized in paulfchristiano’s comment, e.g. the model giving worse answers when it believes it is unobserved. This could yield more robust detection that is harder for the model to circumvent by avoiding overt deception.
If interpretability tools manage to extract a model’s representations of its beliefs, objectives, plans, etc. (a big if!), use the techniques from “Catching AIs red-handed” to examine how those representations differ on the inputs that were “successfully” deceptive (i.e. where the model behaved deceptively but was not caught). There may be learnable patterns in how the model’s cognition differs in those cases which could inspire hypotheses about what relations characterize “successful” deception. Those hypotheses could then be formalized and tested using the framework from “Searching for concepts by their shape”.
A point emphasized in paulfchristiano’s comment is the importance of extreme thoroughness and breadth in searching for deceptive cognition, e.g. (to paraphrase) “even if a model exhibits some deceptive behavior, don’t assume you’ve found all of its tricks; keep probing its behavior in a huge variety of contexts”. MAIA and other automated testing tools should be put in service of this kind of breadth. Rather than just focus on the most egregious cases of deception, we should test the model’s behavior on a vast range of subtle perturbations hunting for any whiff of situational inconsistency. The theoretical frameworks for agency from “Searching for concepts by their shape” can guide us in knowing what to look for.
Extending my previous thoughts, I think one of the most useful things the “Searching for concepts by their shape” framing can contribute is mathematical precision in defining hypotheses about what characterizes deceptive cognition. If we can frame agency/deception in terms of precise relations between features (even if those features are hard to interpretably extract in practice), it gives us a crisper target to aim at with our testing, rather than just relying on intuitive notions of what “seems deceptive”.
However, I continue to be quite pessimistic about the prospects for catching competent deceptive models before it’s too late, for the reasons outlined previously. I worry that by the time a model is smart enough that we could meaningfully apply these combined interpretability strategies to it, it would also be smart enough to notice what we’re doing and come up with countermeasures.
Unless interpretability tools far exceed my expectations in their ability to crack open the black box of neural nets (which would be great!), I suspect the most practical path is to use simpler, more behavioral detection strategies (like the model consistency testing from your Control paper) to avoid building models that are competent enough to be robustly deceptive in the first place. Catching a superintelligent deceptive model after the fact seems like a much harder challenge than noticing warning signs of misalignment in weaker models and heading them off early.
But I would still encourage pursuing all these ideas, even if they end up not being enough on their own to solve the problem. The interpretability insights gained would be valuable for alignment more broadly. And we may get lucky in extracting some key structural signature of deceptive cognition that even very sophisticated models can’t fully mask. It’s an arms race we can’t afford to unilaterally forfeit.′