Intuitions about goal-directed behavior
One broad argument for AI risk is the Misspecified Goal argument:
The Misspecified Goal Argument for AI Risk: Very intelligent AI systems will be able to make long-term plans in order to achieve their goals, and if their goals are even slightly misspecified, then those systems will become adversarial and work against us.
My main goal in this post is to make conceptual clarifications and suggest how they affect the Misspecified Goal argument, without making any recommendations about what we should actually do. Future posts will argue more directly for a particular position. As a result, I will not be considering other arguments for focusing on AI risk even though I find some of them more compelling.
I think of this as a concern about long-term goal-directed behavior. Unfortunately, it’s not clear how to categorize behavior as goal-directed vs. not. Intuitively, any agent that searches over actions and chooses the one that best achieves some measure of “goodness” is goal-directed (though there are exceptions, such as the agent that selects actions that begin with the letter “A”). (ETA: I also think that agents that show goal-directed behavior because they are looking at some other agent are not goal-directed themselves—see this comment.) However, this is not a necessary condition: many humans are goal-directed, but there is no goal baked into the brain that they are using to choose actions.
This is related to the concept of optimization, though with intuitions around optimization we typically assume that we know the agent’s preference ordering, which I don’t want to assume here. (In fact, I don’t want to assume that the agent even has a preference ordering.)
One potential formalization is to say that goal-directed behavior is any behavior that can be modelled as maximizing expected utility for some utility function; in the next post I will argue that this does not properly capture the behaviors we are worried about. In this post I’ll give some intuitions about what “goal-directed behavior” means, and how these intuitions relate to the Misspecified Goal argument.
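To make that formalization concrete, here is a minimal sketch (my own illustration, not part of the argument itself): goal-directedness in this sense would just mean that there is some utility function for which an argmax like the one below reproduces the agent's behavior. The function names and the toy numbers are assumptions made up for the example.

```python
def expected_utility_maximizer(state, actions, transition_probabilities, utility):
    """Choose the action with the highest expected utility over its possible outcomes."""
    def expected_utility(action):
        return sum(prob * utility(outcome)
                   for outcome, prob in transition_probabilities(state, action))
    return max(actions, key=expected_utility)

# Tiny usage example with made-up numbers: a fair bet that pays 10 or costs 5.
def transition_probabilities(state, action):
    if action == "bet":
        return [(+10, 0.5), (-5, 0.5)]
    return [(0, 1.0)]

print(expected_utility_maximizer("start", ["bet", "pass"],
                                 transition_probabilities, utility=lambda o: o))
# -> "bet" (expected utility 2.5 vs 0.0); a sufficiently risk-averse utility would pick "pass" instead.
```

Note that nothing here constrains what the utility function is about; that flexibility is part of why I will argue in the next post that this formalization does not capture the behaviors we are worried about.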
Generalization to novel circumstances
Consider two possible agents for playing some game, let’s say TicTacToe. The first agent looks at the state and the rules of the game, and uses the minimax algorithm to find the optimal move to play. The second agent has a giant lookup table that tells it what move to play given any state. Intuitively, the first one is more “agentic” or “goal-driven”, while the second one is not. But both of these agents play the game in exactly the same way!
The difference is in how the two agents generalize to new situations. Let’s suppose that we suddenly change the rules of TicTacToe—perhaps now the win condition is reversed, so that anyone who gets three in a row loses. The minimax agent is still going to be optimal at this game, whereas the lookup-table agent will lose against any opponent with half a brain. The minimax agent looks like it is “trying to win”, while the lookup-table agent does not. (You could say that the lookup-table agent is “trying to take actions according to <policy>”, but this is a weird complicated goal so maybe it doesn’t count.)
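To make the contrast concrete, here is a minimal sketch (my own illustration, with made-up function names): a minimax agent that recomputes optimal play from the rules, and a lookup-table agent that just replays stored moves. The `three_in_a_row_wins` flag stands in for the reversed win condition described above.

```python
def winner(board):
    """Return 'X' or 'O' if someone has three in a row on this 9-cell board, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax_move(board, player, three_in_a_row_wins=True):
    """Pick an optimal move for `player`, recomputed from the (possibly reversed) rules."""
    opponent = {'X': 'O', 'O': 'X'}

    def value(board, to_move):
        w = winner(board)
        if w is not None:
            score = 1 if w == player else -1
            return score if three_in_a_row_wins else -score
        moves = [i for i, cell in enumerate(board) if cell is None]
        if not moves:
            return 0  # draw
        values = [value(board[:i] + [to_move] + board[i + 1:], opponent[to_move])
                  for i in moves]
        return max(values) if to_move == player else min(values)

    moves = [i for i, cell in enumerate(board) if cell is None]
    return max(moves, key=lambda i: value(board[:i] + [player] + board[i + 1:],
                                          opponent[player]))

class LookupTableAgent:
    """Replays moves from a fixed table; it has no model of the rules at all."""
    def __init__(self, table):
        self.table = table  # e.g. precomputed from minimax under the *old* rules

    def move(self, board):
        return self.table[tuple(board)]

# X holds 0 and 2, O holds 1 and 4 (threatening the middle column); X to move.
board = ['X', 'O', 'X',
         None, 'O', None,
         None, None, None]
print(minimax_move(board, 'X', three_in_a_row_wins=True))   # 7: blocks O under the normal rules
print(minimax_move(board, 'X', three_in_a_row_wins=False))  # recomputed for the reversed win condition

lookup_agent = LookupTableAgent({tuple(board): 7})  # stored move was only ever "good" under the old rules
print(lookup_agent.move(board))  # still 7, no matter which rules are in force
```

The lookup-table agent behaves identically to the minimax agent as long as its stored moves happen to match, but only the minimax agent's behavior is predictable from "it is trying to win" when the rules change.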
In general, when we say that an agent is pursuing some goal, this is meant to allow us to predict how the agent will generalize to some novel circumstance. This sort of reasoning is critical for the Misspecified Goal argument for AI risk. For example, we worry that an AI agent will prevent us from turning it off, because that would prevent it from achieving its goal: “You can’t fetch the coffee if you’re dead.” This is a prediction about what an AI agent would do in the novel circumstance where a human is trying to turn the agent off.
This suggests a way to characterize these sorts of goal-directed agents: there is some goal such that the agent’s behavior in new circumstances can be predicted by figuring out which behavior best achieves the goal. There’s a lot of complexity in the space of goals we consider: something like “human well-being” should count, but “the particular policy <x>” and “pick actions that start with the letter A” should not. When I use the word goal I mean to include only the first kind, even though I currently don’t know theoretically how to distinguish between the various cases.
Note that this is in stark contrast to existing AI systems, which are particularly bad at generalizing to new situations.
Honestly, I’m surprised it’s only 90%. [1]
Empowerment
We could also look at whether or not the agent acquires more power and resources. It seems likely that an agent that is optimizing for some goal over the long term would want more power and resources in order to more easily achieve that goal. In addition, the agent would probably try to improve its own algorithms in order to become more intelligent.
This feels like a consequence of goal-directed behavior, and not its defining characteristic, because it is about being able to achieve a wide variety of goals, instead of a particular one. Nonetheless, it seems crucial to the broad argument for AI risk presented above, since an AI system will probably need to first accumulate power, resources, intelligence, etc. in order to cause catastrophic outcomes.
I find this concept most useful when thinking about the problem of inner optimizers, where in the course of optimization through a rich space you stumble across a member of the space that is itself doing optimization, but for a related but still misspecified metric. Since the inner optimizer is being “controlled” by the outer optimization process, it is probably not going to cause major harm unless it is able to “take over” the outer optimization process, which sounds a lot like accumulating power. (This discussion is extremely imprecise and vague; see Risks from Learned Optimization for a more thorough discussion.)
Our understanding of the behavior
There is a general pattern in which as soon as we understand something, it becomes something lesser. As soon as we understand rainbows, they are relegated to the “dull catalogue of common things”. This suggests a somewhat cynical explanation of our concept of “intelligence”: an agent is considered intelligent if we do not know how to achieve the outcomes it does using the resources that it has (in which case our best model for that agent may be that it is pursuing some goal, reflecting our tendency to anthropomorphize). That is, our evaluation of intelligence is a statement about our epistemic state. Some examples that follow this pattern are:
As soon as we understand how some AI technique solves a challenging problem, it is no longer considered AI. Before we’ve solved the problem, we imagine that we need some sort of “intelligence” that is pointed towards the goal and solves it: the only method we have of predicting what this AI system will do is to think about what a system that tries to achieve the goal would do. Once we understand how the AI technique works, we have more insight into what it is doing and can make more detailed predictions about where it will work well, where it tends to make mistakes, etc., and so it no longer seems like “intelligence”. Once you know that OpenAI Five is trained via self-play, you can predict that it probably hasn’t encountered behaviors like standing still to turn invisible, and probably won’t respond well to them.
Before we understood the idea of natural selection and evolution, we would look at the complexity of nature and ascribe it to intelligent design; once we had the mathematics (and even just the qualitative insight), we could make much more detailed predictions, and nature no longer seemed like it required intelligence. For example, we can predict the timescales on which we can expect evolutionary changes, which we couldn’t do if we just modeled evolution as optimizing reproductive fitness.
Many phenomena (eg. rain, wind) that we now have scientific explanations for were previously explained to be the result of some anthropomorphic deity.
When someone performs a feat of mental math, or can tell you instantly what day of the week a random date falls on, you might be impressed and think them very intelligent. But if they explain to you how they did it, you may find it much less impressive. (Though of course these feats are selected to seem more impressive than they are.)
Note that an alternative hypothesis is that humans equate intelligence with mystery; as we learn more and remove mystery around eg. evolution, we automatically think of it as less intelligent.
To the extent that the Misspecified Goal argument relies on this intuition, the argument feels a lot weaker to me. If the Misspecified Goal argument rested entirely upon this intuition, then it would be asserting that because we are ignorant about what an intelligent agent would do, we should assume that it is optimizing a goal, which means that it is going to accumulate power and resources and lead to catastrophe. In other words, it would be treating “intelligent” as definitionally implying that the agent will accumulate power and resources. This seems clearly wrong; it is possible in principle to have an intelligent agent that nonetheless does not accumulate power and resources.
Also, on this reading the argument is not saying that in practice most intelligent agents accumulate power and resources; it says only that we have no better model to go off of than “goal-directed”, and then pushes this model into extreme scenarios where we should have a lot more uncertainty.
To be clear, I do not think that anyone would endorse the argument as stated. I am suggesting as a possibility that the Misspecified Goal argument relies on us incorrectly equating superintelligence with “pursuing a goal” because we use “pursuing a goal” as a default model for anything that can do interesting things, even if that is not the best model to be using.
Summary
Intuitively, goal-directed behavior can lead to catastrophic outcomes with a sufficiently intelligent agent, because the optimal behavior for even a slightly misspecified goal can be very bad according to the true goal. However, it’s not clear exactly what we mean by goal-directed behavior. Often, an algorithm that searches over possible actions and chooses the one with the highest “goodness” will be goal-directed, but this is neither necessary nor sufficient.
“From the outside”, it seems like a goal-directed agent is characterized by the fact that we can predict the agent’s behavior in new situations by assuming that it is pursuing some goal, and that as a result it acquires power and resources. This can be interpreted either as a statement about our epistemic state (we know so little about the agent that our best model is that it pursues a goal, even though this model is not very accurate or precise) or as a statement about the agent (predicting the behavior of the agent in new situations based on pursuit of a goal actually has very high precision and accuracy). These two views have very different implications for the validity of the Misspecified Goal argument for AI risk.
[1] This is an entirely made-up number.
Do you have a citation for this? Who are you arguing against, or whose argument are you trying to clarify?
I tend to have a different version of the Misspecified Goal argument in mind which I think doesn’t have this problem:
At least some humans are goal-directed at least some of the time.
It’s likely possible to create artificial agents that are better at achieving goals than humans.
It will be very tempting for some humans to build such agents in order to help those humans achieve their goals (either instrumental goals or their understanding of their terminal goals).
The most obvious way for other humans to compete with or defend against such goal-oriented artificial agents is to build their own goal-oriented artificial agents.
If at that point we do not know how to correctly specify the goals for such artificial agents (and we haven’t figured out how to stop / compete with / defend against such agents some other way), the universe will end up being directed towards the wrong goals, which may be catastrophic depending on various contingencies, such as what the correct meta and normative ethics turn out to be, whether the “incorrect” goals we build into such agents are good enough to capture most of our scalable values, etc.
I briefly looked for and did not find a good citation for this.
I’m not sure. However, I have a lot of conversations where it seems to me that the other person believes the Misspecified Goal Argument. Currently, if I were to meet a MIRI employee I hadn’t met before, I would be unsure whether the Misspecified Goal Argument is their primary reason for worrying about AI risk. If I meet a rationalist who takes the MIRI perspective on AI risk but isn’t at MIRI themselves, by default I assume that their primary reason for caring about AI risk is the Misspecified Goal argument.
I do want to note that I am primarily trying to clarify here, I didn’t write this as an argument against the Misspecified Goal argument. In fact, conditional on the AI having goals, I do agree with the Misspecified Goal argument.
Yeah, I think this is a good argument, and I want to defer to my future post on the topic, which should come out on Wednesday. The TL;DR is that I agree with the argument but it implies a broader space of potential solutions than “figure out how to align a goal-directed AI”.
(Sorry that I didn’t adequately point to different arguments and what I think about them—I didn’t do this because it would make for a very long post, and it’s instead being split into several posts, and this particular argument happens to be in the post on Wednesday.)
My guess is that agents that are not primarily goal-directed can be good at defending against goal-directed agents (especially with first mover advantage, preventing goal-directed agents from gaining power), and are potentially more tractable for alignment purposes, if humans coexist with AGIs during their development and operation (rather than only exist as computational processes inside the AGI’s goal, a situation where a goal concept becomes necessary).
I think the assumption that useful agents must be goal-directed has misled a lot of discussion of AI risk in the past. Goal-directed agents are certainly a problem, but not necessarily the solution. They are probably good for fixing astronomical waste, but maybe not AI risk.
I think I disagree with this at least to some extent. Humans are not generally safe agents, and in order for not-primarily-goal-directed AIs to not exacerbate humans’ safety problems (for example by rapidly shifting their environments/inputs out of a range where they are known to be relatively safe), it seems that we have to solve many of the same metaethical/metaphilosophical problems that we’d need to solve to create a safe goal-directed agent. I guess in some sense the former has lower “AI risk” than the latter in that you can plausibly blame any bad outcomes on humans instead of AIs, but to me that’s actually a downside because it means that AI creators can more easily deny their responsibility to help solve those problems.
Learning how to design goal-directed agents seems like an almost inevitable milestone on the path to figuring out how to safely elicit human preference in an actionable form. But the steps involved in eliciting and enacting human preference don’t necessarily make use of a concept of preference or goal-directedness. An agent with a goal aligned with the world can’t derive its security from the abstraction of goal-directedness, because the world determines that goal, and so the goal is vulnerable to things in the world, including human error. Only self-contained artificial goals are safe from the world and may lead to safety of goal-directed behavior. A goal built from human uploads that won’t be updated from the world in the future gives safety from other things in the world, but not from errors of the uploads.
When the issue is figuring out which influences of the world to follow, it’s not clear that goal-directedness remains salient. If there is a goal, then there is also a world-in-the-goal and listening to your own goal is not safe! Instead, you have to figure out which influences in your own goal to follow. You are also yourself part of the world and so there is an agent-in-the-goal that can decide aspects of preference. This framing where a goal concept is prominent is not obviously superior to other designs that don’t pursue goals, and instead focus on pointing at the appropriate influences from the world. For example, a system may seek to make reliable uploads, or figure out which decisions of uploads are errors, or organize uploads to make sense of situations outside normal human environments, or be corrigible in a secure way, so as to follow directions of a sane external operator and not of an attacker. Once we have enough of such details figured out (none of which is a goal-directed agent), it becomes possible to take actions in the world. At that point, we have a system of many carefully improved kluges that further many purposes in much the same way as human brains do, and it’s not clearly an improvement to restructure that system around a concept of goals, because that won’t move it closer to the influences of the world it’s designed to follow.
This makes me think I probably misunderstood what you meant earlier by “agents that are not primarily goal-directed”. Do you have a reference that you can point me to that describes what you have in mind in more detail?
According to GAZP vs. GLUT with consciousness replaced by goal-directed behavior, we may want to say that goal-directed behavior is involved in the creation or even just specification of the giant lookup table TicTacToe agent.
Yeah, that’s right. There’s some sort of goal/specification/desire that picks out the particular behavior we want out of the large space of possible behaviors. However, that goal/specification/desire need not be internal to the AI system, and it need not be long-term.
(Side note: it’s not that giant)
This post did not convince me that the bolded part is not (almost) a solution.
I would honestly be very surprised if [whatever definition of goal-directedness we end up with] turns out to be such that approval-directed agents are not goal-directed. In what sense is “do what this other agent likes” not a goal? It also seems like you could define lots of things that are kind of like approval-directedness (do what agent a likes, do what is good for a, do what a will reward me for). Are some of those goal-directed and others not?
Excluding approval-directedness feels like patching the natural formulation of goal-directedness: ‘It has to be any kind of goal except doing what someone likes’, and even then there seem to be many goals that are similar to ‘do what someone likes’ which would have to be patched as well. What’s wrong with saying approval-directedness is a special kind of goal-directedness?
I agree that the agent that selects actions that begin with ‘A’ is a counter-example, but this feels like it belongs to a specific class of counter-examples where it’s okay (or even necessary) to patch them. As in, you have these real things (possible actions), you have your representation of them, and now you define something in terms of the representation. (This feels similar to the agent that runs a search algorithm for a random objective, stops as soon as one of its registers is set to 56789ABC and then outputs the policy it’s currently considering.) A definition just has to say that the ‘goodness’ criterion is not allowed to depend on the representation (names of policies/parts of the code/etc.).
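As a toy illustration of this representation-dependence (my own example, with made-up action names and outcome values): both selectors below run the same search over actions, but one scores actions by their consequences while the other scores them by an artifact of how they are named.

```python
# Hypothetical actions mapped to the value of their outcomes (made-up numbers).
actions = {"Advance": 3, "Block": 5, "Retreat": 1}

def outcome_directed_choice(actions):
    """'Goodness' depends on what the action does in the world."""
    return max(actions, key=lambda a: actions[a])

def representation_directed_choice(actions):
    """'Goodness' depends only on the action's name, i.e. on its representation."""
    return max(actions, key=lambda a: a.startswith("A"))

print(outcome_directed_choice(actions))         # "Block": the best outcome
print(representation_directed_choice(actions))  # "Advance": it merely starts with "A"
```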
Is there a counter-example (something that runs a search but isn’t goal directed) that isn’t about the representation?
I assume an updated version of this would link to the Risks from Learned Optimization paper.
Yeah I think I broadly agree with this perspective (in contrast to how I used to think about it), but I still think there is a meaningful distinction in an AI agent that pursues goals that other agents (e.g. humans) have. I would no longer call it “not goal-directed”, but it still seems like an important distinction.
Yes, updated, thanks.
Even this seems wrong or confusing, since it would appear to include approval-directed agents under “goal-directed behavior” (since an approval-directed agent searches over possible actions and chooses the one with the highest expected approval, which is a kind of “goodness”).
Agreed, changed that sentence so that it no longer claims it is a sufficient condition. Thanks for catching that!
“I find this concept most useful when thinking about the problem of inner optimizers, where in the course of optimization through a rich space you stumble across a member of the space that is itself doing optimization, but for a related but still misspecified metric.”—Could you clarify what kind of algorithm you are imagining being run?
I could imagine this happening with standard deep RL over a long enough time horizon with enough compute. Again though, I want to defer to the upcoming sequence on the topic, which should have a good in-depth explanation.
My rough mental summary of these intuitions:
Generalisation ability suggests that behaviour is goal-directed because it demonstrates adaptability (and goals are a more adaptable/compact way of defining behaviour than alternatives, like enumeration).
Power grabs suggest that behaviour is goal-directed because they reveal instrumentalism.
Our understanding of intelligence might be limited to human intelligence, which is sometimes goal-directed, so we use this as a proxy for intelligence (adding some, perhaps irrefutable, skepticism of goal-directedness as a model of intelligence).