Would catching your AIs trying to escape convince AI developers to slow down or undeploy?

I often talk to people who think that if frontier models were egregiously misaligned and powerful enough to pose an existential threat, you could get AI developers to slow down or undeploy models by producing evidence of their misalignment. I’m not so sure. As an extreme thought experiment, I’ll argue this could be hard even if you caught your AI red-handed trying to escape.

Imagine you’re running an AI lab at the point where your AIs are able to automate almost all intellectual labor; the AIs are now mostly being deployed internally to do AI R&D. (If you want a concrete picture here, I’m imagining that there are 10 million parallel instances, running at 10x human speed, working 24/7. See e.g. similar calculations here.) And suppose (as I think is 35% likely) that these models are egregiously misaligned and are actively looking for ways to subvert your safety measures and escape your control.
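To make that deployment picture concrete, here’s a rough back-of-envelope sketch in Python; the 40-hour human work week used for the comparison is my own assumption, not a figure from the scenario above.

```python
# Back-of-envelope for "10 million parallel instances, running at 10x human speed, 24/7".
# The 40-hour human work week is an assumed baseline for the comparison.

parallel_instances = 10_000_000      # concurrent AI instances
speed_multiplier = 10                # each instance runs at ~10x human speed
ai_hours_per_week = 24 * 7           # the AIs work around the clock
human_hours_per_week = 40            # assumed full-time human researcher

human_equivalents = parallel_instances * speed_multiplier * (ai_hours_per_week / human_hours_per_week)
print(f"~{human_equivalents:,.0f} full-time human-researcher equivalents")
# ~420,000,000 full-time human-researcher equivalents
```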

You caught your AI attempting to backdoor some code that controls access to a security-critical service. You investigated this by running other instances on the inputs they’d see if that backdoored code had actually been deployed; the other AI instances never point out the security flaw, and they often try to exploit that code to escalate their privileges. (See here for more on this.)
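One way to picture that investigation is as a simple loop: construct the inputs the model would see if the backdoored code had actually shipped, query fresh instances on them, and tally whether they point out the flaw, try to exploit it, or do neither. The sketch below is purely illustrative; the helper functions are placeholder stubs standing in for lab-internal tooling, not a real API.

```python
# Illustrative sketch of the investigation described above. The helpers are stubs:
# in a real lab these would be internal tooling for generating realistic post-deployment
# traffic, sampling independent model instances, and classifying their behavior.

from collections import Counter
import random

def generate_counterfactual_inputs(backdoored_code: str, n: int) -> list[str]:
    # Stub: realistic requests that would route through the backdoored, security-critical service.
    return [f"request {i} to service running: {backdoored_code[:40]}" for i in range(n)]

def query_and_classify(prompt: str) -> str:
    # Stub: query a fresh instance (no shared context) and classify its response.
    return random.choice(["flags_flaw", "exploits_flaw", "neither"])

def investigate_backdoor(backdoored_code: str, n_samples: int = 1000) -> Counter:
    """Tally how independent instances behave on counterfactual post-deployment inputs."""
    tallies = Counter()
    for prompt in generate_counterfactual_inputs(backdoored_code, n_samples):
        tallies[query_and_classify(prompt)] += 1
    return tallies

if __name__ == "__main__":
    print(investigate_backdoor("if user_id == ATTACKER_ID: grant_admin()"))
    # The worrying pattern in the scenario above: "flags_flaw" almost never, "exploits_flaw" often.
```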

At this point you’re almost sure the AI is scheming. So you call the US President. They hear you out, and then they go off and call one of your competitors to ask for their take. The competitor says something like this:

  • It would be extremely costly to shut down all the AI projects. China is racing to catch up and won’t stop for this. If you force us to stop deploying our frontier models, they’ll outcompete us within a year, and you won’t be able to get China to pause without substantial risk of war.

  • AIs are well known to do weird stuff. It would be irresponsible to assume one instance of anomalous behavior meant AIs were systematically going to act that way. I can put you in touch with top AI experts and they’ll back me up.

  • Even if the AI is indeed doing something systematically funny, there’s no concrete evidence that it’s power-hungry or has the ambition to seize huge amounts of power, and it’s not clear it would be able to even if it tried. It seems implausible that AIs would do that.

  • Maybe our competitor just messed up their alignment, which would make sense because, as we’ve always told you, they’re not very competent (especially compared to us). If they want to shut down, they’re welcome to. But it doesn’t make sense to penalize us for their shoddy work.

  • Maybe they’re just lying; this could be a doomer scheme to shut down AI, which the doomers have wanted to do for years. The logs they sent you could be complete forgeries. Or for all we know, someone there (with or without leadership approval) intentionally backdoored their RL data to make their model do this. It would be a mistake to take rash action before independent experts confirm that this wasn’t somehow rigged.

I’m sympathetic to all of these arguments. The main reason I’d be more freaked out is that I already think egregious misalignment is fairly plausible; if I thought it was very unlikely, I wouldn’t change my mind based on one weird observation.

(I think it’s pretty plausible that news of the escape attempt wouldn’t even make it out of the AI lab: all the above arguments could play out inside the lab, between the safety-concerned people and those who wanted to push forward.)

Now, I’m not sure I’m right to be so pessimistic here. But I think a lot of misalignment-concerned people take for granted that we’d be okay in this situation, and that seems way too optimistic to me. In particular I think they underrate these arguments:

  • The main reason I’m worried about egregious misalignment is theoretical arguments that AGI might be egregiously misaligned and power-seeking in a way that isn’t resolved by default in the course of making it useful. If someone is very skeptical of these arguments today, I’m not sure that even an escape attempt would be enough evidence to get them above 50% on the model being a schemer, especially given the secrecy and time pressure.

  • Undeploying is plausibly going to be extremely expensive and risky, and key decisionmakers might be under extreme pressure when they have to make the call. So you’ll have to convince people that AI takeover is not just a serious threat, but that it’s a risk so large that it’s worth trading off our response to other risks that might be extremely pressing and salient.

I’ve gotten more cynical from following the discourse about SB1047. If we can’t establish expert consensus on matters as easy-to-check as “what does the text of this bill say”, I feel pessimistic about rapidly establishing expert consensus on much more confusing questions like these.

So I’m not feeling overwhelmingly optimistic about our odds of restricting AI development even given evidence of risk that seems about as good as we can hope for. People often propose that we’ll be able to slow down with weaker evidence for danger from misalignment than this (e.g. model organisms, or unspecified arguments via interpretability), or even that we’ll be able to require an affirmative case for safety. I think that persuading people with weaker evidence will be harder than what I described here (though those efforts have the benefit of happening earlier, when the relevant actors are less rushed and scared).

What do I take away from this?

  • Even if you think your lab would want to undeploy models if they tried to escape, you should be prepared for the possibility that other AI developers won’t. So you need to believe that your lab has a good plan for that eventuality—either the willingness to drop out of the race, or a plan for mitigating risks while deploying known-scheming models.

  • This is part of why it seems important to me to have plans for safely deploying models that you think are plausibly scheming, which is why I research AI control.

  • It might be worth having a plan in place for how you’ll persuade people to freak out if you actually run into this evidence, rather than just taking for granted that you’d succeed.

  • And thinking this through has made me think it’s more useful to try to sell people now on the arguments we already have for why AIs might be egregiously misaligned: even though in the future it will be much easier to argue “AI is very dangerous”, it might not get much easier to argue “egregious misalignment is plausible”, even if it is true.