The Achilles Heel Hypothesis for AI
Pitfalls for AI Systems via Decision Theoretic Adversaries
This post accompanies a new paper related to AI alignment. A brief outline and informal discussion of the ideas are presented here, but of course, you should check out the paper for the full thing.
As progress in AI continues at a rapid pace, it is important to know how advanced systems will make choices and in what ways they may fail. When thinking about the prospect of superintelligence, I think it’s all too easy and all too common to imagine that an ASI would be something which humans, by definition, can’t ever outsmart. But I don’t think we should take this for granted. Even if an AI system seems very intelligent—potentially even superintelligent—this doesn’t mean that it’s immune to making egregiously bad decisions when presented with adversarial situations. Thus the main insight of the paper:
The Achilles Heel hypothesis: Being a highly successful goal-oriented agent does not imply a lack of decision theoretic weaknesses in adversarial situations. Highly intelligent systems can stably possess “Achilles Heels” which cause these vulnerabilities.
More precisely, I define an Achilles Heel as a delusion which is impairing (it results in irrational choices in adversarial situations), subtle (it doesn’t result in irrational choices in normal situations), implantable (it can be introduced into a system), and stable (it remains in a system reliably over time).
In the paper, a total of 8 prime candidate Achilles Heels are considered, alongside ways by which they could be exploited and implanted:
Corrigibility
Evidential decision theory
Causal decision theory
Updateful decision theory
Simulational belief
Sleeping Beauty assumptions
Infinite temporal models
Aversion to the use of subjective priors
This was all inspired by thinking about how, since paradoxes can often stump humans, they might also fool certain AI systems in ways that we should anticipate. The paper surveys and augments work in decision theory involving dilemmas and paradoxes in the context of this hypothesis and makes a handful of novel contributions involving implantation. My hope is that this will lead to insights on how to better model and build advanced AI. On one hand, Achilles Heels are a possible failure mode which we want to avoid; on the other, they are an opportunity for building better models via adversarial training or for containment via the deliberate use of certain Achilles Heels. The paper may also just be a useful reference in general for the topics it surveys.
For more info, you’ll have to read it! Also feel free to contact me at scasper@college.harvard.edu.
Has anyone else noticed that this paper is much clearer on definitions and much more readable than the vast majority of AI safety literature, much of which it draws on? Like, it has a lot of definitions that could be put in an “encyclopedia for friendly AI,” so to speak.
Some extra questions:
How much time/effort did it take for you to write this all? What was the hardest part of this?
Do most systems today unintentionally have corrigibility simply b/c they are not complex enough to represent “being turned off” as a strong negative in their reward functions?
Are Newcombian problems rarely found in the real world, but much more likely to be found in the AI world (esp. b/c the AI has a modeler that can model what it would do)?
It’s really nice to hear that the paper seems clear! Thanks for the comment.
I’ve been working on this since March, but at a very slow pace, and I took a few hiatuses. Most days when I’d work on it, it was for less than an hour. After coming up with the initial framework to tie things together, the hardest part was trying and failing to think of interesting ways in which most of the Achilles Heels presented could be used as novel containment measures. I discuss this a bit in the discussion section.
For 2-3, I can give some thoughts, but these aren’t necessarily thought through much more than those of many other people one could ask.
I would agree with this. For an agent to even have a notion of being turned off, it would need some sort of model that accounts for this but which isn’t learned via experience in a typical episodic learning setting (clearly, because you can’t learn after you’re dead). This would all require a world model more sophisticated than what any of the model-based RL techniques I know of would be capable of by default.
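To make that concrete, here is a minimal toy sketch (my own illustration, not anything from the paper) of a tabular Q-learning update, assuming a hypothetical two-action toy problem. On a terminal transition the bootstrap target is just the immediate reward, so nothing that happens after shutdown is ever assigned a value; the only aversion to being turned off such an agent can learn comes from forgone in-episode reward.

```python
# Toy sketch: in standard episodic Q-learning, "being turned off" is just an
# episode terminating. The terminal target is the immediate reward alone, so
# the agent never represents anything that happens after shutdown.

def q_update(q, state, action, reward, next_state, done, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step."""
    if done:
        target = reward  # terminal: no value is backed up from "after" shutdown
    else:
        target = reward + gamma * max(q[next_state].values())
    q[state][action] += alpha * (target - q[state][action])

# A "shutdown" transition and a "keep working" transition are updated the same
# way; the only difference the agent can learn is the reward it forgoes.
q = {"s": {"allow_shutdown": 0.0, "work": 0.0}, "s2": {"work": 0.0}}
q_update(q, "s", "allow_shutdown", reward=0.0, next_state=None, done=True)
q_update(q, "s", "work", reward=1.0, next_state="s2", done=False)
print(q["s"])  # {'allow_shutdown': 0.0, 'work': 0.1}
```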
I also would agree. The most straightforward way for these problems to emerge is if a predictor has access to the agent’s source code, though sometimes they can occur if the predictor has access to some other means of prediction which cannot be confounded by the choice of what source code the agent runs. I write a little about this in this post: https://www.lesswrong.com/posts/xoQRz8tBvsznMXTkt/dissolving-confusion-around-functional-decision-theory
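As a toy illustration of the source-code case (my own sketch, not from the paper or the linked post; the function names are hypothetical): if the predictor can simply run the agent’s decision procedure, a Newcomblike payoff structure falls out immediately.

```python
# Toy Newcomblike setup where the predictor predicts by simulating the agent's
# decision procedure (i.e., it has access to the agent's "source code").

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def fill_boxes(agent_policy):
    """The predictor simulates the agent, then fills the boxes accordingly."""
    predicted = agent_policy()
    opaque = 1_000_000 if predicted == "one-box" else 0
    transparent = 1_000
    return opaque, transparent

def play(agent_policy):
    opaque, transparent = fill_boxes(agent_policy)
    choice = agent_policy()  # the agent's actual choice matches the simulation
    return opaque if choice == "one-box" else opaque + transparent

print(play(one_boxer))  # 1000000
print(play(two_boxer))  # 1000
```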
I have thought of a similar idea: “philosophical landmines” (PL) to stop unfriendly AI. PLs are tasks which are simple in formulation but could halt an AI because they require an infinite amount of computation to solve. Examples include the Buridan’s ass problem, the problem of whether the AI is real or just possible, the problem of whether it is in a simulation or not, other anthropic riddles, and Pascal’s-mugging-like scenarios.
The best such problems should not be published, as they could be used as our last defence against UFAI.
I think that AI capable of being nerd-sniped by these landmines will probably be nerd-sniped by them (or other ones we haven’t thought of) on its own without our help. The kind of AI that I find more worrying (and more plausible) is the kind that isn’t significantly impeded by these landmines.
Yes, landmines are the last level of defence, and they have a very low probability of working (like 0.1 per cent). However, if an AI is robust to all possible philosophical landmines, it is a very stable agent and has a higher chance of keeping its alignment and not failing catastrophically.
Thanks for the comment. +1 to it. I also agree that this is an interesting concept: using Achilles Heels as containment measures. There is a discussion related to this on page 15 of the paper. In short, I think that this is possible and useful for some Achilles Heels and would be a cumbersome containment measure for others, which could be accomplished more simply via bribes of reward.