Instruction-following AGI is easier and more likely than value aligned AGI

Summary:

We think a lot about aligning AGI with human values. I think it’s more likely that we’ll try to make the first AGIs do something else. This might intuitively be described as trying to make instruction-following (IF) or do-what-I-mean-and-check (DWIMAC) the central goal of the AGI we design. Adopting this goal target seems to improve the odds of success of any technical alignment approach. It avoids the hard problem of specifying human values in an adequately precise and stable way, and it substantially helps with goal misspecification and deception by allowing one to treat the AGI as a collaborator in keeping it aligned as it becomes smarter and takes on more complex tasks.

This is similar to but distinct from the goal targets of prosaic alignment efforts. Instruction-following is a single goal target that is more likely to be reflexively stable in a full AGI with explicit goals and self-directed learning. It is counterintuitive and concerning to imagine a superintelligent AGI that “wants” only to follow the instructions of a human; but on analysis, this approach seems both more appealing and more workable than the alternative of creating sovereign AGI with human values.

Instruction-following AGI could actually work, particularly in the short term. And it seems likely to be tried, even if it won’t work. So it probably deserves more thought.

Overview/Intuition

How to use instruction-following AGI as a collaborator in alignment

  • Instruct the AGI to tell you the truth

    • Investigate its understanding of itself and “the truth”

    • Use interpretability methods

  • Instruct it to check before doing anything consequential

    • Instruct it to use a variety of internal reviews to predict consequences

  • Ask it a bunch of questions about how it would interpret various commands

  • Repeat all of the above as it gets smarter

  • Frequently ask it for advice and about how its alignment could go wrong

Now, this won’t work if the AGI won’t even try to fulfill your wishes. In that case you totally screwed up your technical alignment approach. But if it will even sort of do what you want, and it at least sort of understands what you mean by “tell the truth”, you’re in business. You can leverage partial alignment into full alignment—if you’re careful enough, and the AGI gets smarter slowly enough.
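To make that loop concrete, here’s a toy sketch of the oversight cycle in Python. Every class and function below is a placeholder I’m making up for illustration; in practice each step would be interviews, interpretability tooling, retraining, and a lot of human judgment.

```python
# A toy sketch of the oversight loop above, not a real procedure. The classes
# and the reviewer function are stand-ins for much harder processes.

class ToyAGI:
    def __init__(self):
        self.capability = 1.0
        self.standing_instructions = []

    def instruct(self, text):
        self.standing_instructions.append(text)

    def answer(self, question):
        # Stand-in for querying the system (and for interpretability checks).
        return f"[capability {self.capability:.1f}] answer to: {question!r}"

    def train_step(self):
        self.capability *= 1.1  # slow takeoff: capability grows gradually


def oversight_loop(agi, reviewer, capability_ceiling=2.0):
    agi.instruct("Tell me the truth.")
    agi.instruct("Check with me before doing anything consequential.")
    questions = [
        "What do you mean by 'the truth'?",
        "How would you interpret these commands?",
        "How could your alignment go wrong?",
    ]
    while agi.capability < capability_ceiling:
        reports = [agi.answer(q) for q in questions]
        if not reviewer(reports):  # human judgment plus interpretability
            print("Concern found: pause, re-engineer, retrain.")
            break
        agi.train_step()  # repeat all of the above as it gets smarter
    else:
        print("Reached the capability ceiling with no concerns flagged in this toy run.")


oversight_loop(ToyAGI(), reviewer=lambda reports: all("deception" not in r for r in reports))
```

The point is just the shape: instructions plus questions, repeated every time capability increases, with the option to stop and re-engineer when the answers look wrong.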

It’s looking like the critical risk period is probably going to involve AGI on a relatively slow takeoff toward superintelligence. Being able to ask questions and give instructions, and even retrain or re-engineer the system, is much more useful if you’re guiding the AGI’s creation and development, not just “making wishes” as we’ve thought about AGI goals in fast takeoff scenarios.

Instruction-following is safer than value alignment in a slow takeoff

Instruction-following with verification or DWIMAC seems both intuitively and analytically appealing compared to more commonly discussed[1] alignment targets.[2] This is my pitch for why it should be discussed more. It doesn’t require solving ethics to safely launch AGI, and it includes most of the advantages of corrigibility,[3] including stopping on command. Thus, it substantially mitigates (although doesn’t outright solve) some central difficulties of alignment: goal misspecification (including not knowing what values to give it as goals) and alignment stability over reflection and continuous learning.

This approach makes one major difficulty worse: humans remaining in control, with the attendant power struggles and other foolishness. I think the most likely scenario is that we succeed at technical alignment but fail at societal alignment. But I think there is a path to a vibrant future if we limit AGI proliferation to one or a few AGIs and avoid major mistakes. I have difficulty judging how likely that is, but the odds will improve if semi-wise humans keep getting input from their increasingly wise AGIs.

More on each of these in the “difficulties” section below.

In working through the details of the scheme, I’m thinking primarily about aligning AGI based on language-capable foundation models, with scaffolding to provide other cognitive functions like episodic memory, executive function, and both human-like and nonhuman sensory and action capabilities. I think that such language model cognitive architectures (LMCAs) are the most likely path to AGI (and curiously, the easiest for technical alignment). But this alignment target applies to other types of AGI and other technical alignment plans as well. For instance, Steve Byrnes’ plan for mediocre alignment could be used to create mediocre alignment toward instruction-following in RL-based AGI, and the techniques here could leverage that mediocre alignment into more complete alignment.
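To give a sense of what I mean by scaffolding, here’s a deliberately minimal sketch of the LMCA shape. The `language_model` stub and everything around it are my own illustrative assumptions, not any particular system.

```python
# A deliberately minimal sketch of a language model cognitive architecture
# (LMCA): a stub model call wrapped with episodic memory and an executive
# loop. All names are illustrative assumptions.

def language_model(prompt: str) -> str:
    # Placeholder for a call to a foundation model.
    return f"(model output for: {prompt[:60]}...)"


class EpisodicMemory:
    def __init__(self):
        self.events = []

    def store(self, event: str):
        self.events.append(event)

    def recall(self, n: int = 3):
        return self.events[-n:]


def executive_step(goal: str, memory: EpisodicMemory, observation: str) -> str:
    """One executive-function cycle: recall context, plan with the model, remember."""
    context = "; ".join(memory.recall())
    plan = language_model(
        f"Goal: {goal}. Recent events: {context}. Observation: {observation}. Next action?"
    )
    memory.store(f"observed: {observation}; planned: {plan}")
    return plan


memory = EpisodicMemory()
print(executive_step("follow the principal's instructions", memory,
                     "the principal asked for a status report"))
```

The alignment-relevant point is only the shape: the goal sits in an explicit, human-readable slot in the loop, which is part of why I suspect this path is comparatively tractable for technical alignment.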

Relation to existing alignment approaches

This alignment (or goal)[2] target is similar to but importantly distinct from inverse reinforcement learning and other value learning approaches. Instead of learning what you want and doing that, a DWIMAC or IF agent wants to do what you say. It doesn’t learn what you want; it just learns what you tend to mean by what you say. While you might use reinforcement learning to make it “want” to do what you say, I don’t think you need to, or should. So this approach isn’t teaching it your values. The AGI learns what people tend to mean by predictive or other learning methods. Making it “want” to do what it understood the human to mean is a matter of engineering its steering subsystem to follow that goal.

This is a subset of corrigibility in the broader Christiano sense.[4] But instruction-following is distinct from the (ill-defined) alignment targets of most prosaic alignment work. A DWIMAC agent doesn’t actually want to be helpful, because we don’t want to leave “helpful” up to its interpretation. The principal (human in charge) may have given it background instructions to try to be helpful in carefully defined ways and contexts, but the proposal is that the AGI’s first and only motivation be continuing to take and follow commands from its principal(s).

Max Harms has been working on this comparison, and the strengths of full Christiano corrigibility as an alignment target; we can hope to see his more thorough analysis published in the near future. I’m not personally sure which approach is ultimately better, because neither has received much discussion and debate. It’s possible that these two alignment targets are nearly identical once you’ve given wisely thought out background instructions to your AGI.

Instruction-following as an AGI alignment target is distinct from most discussions of “prosaic alignment”. Those seem largely directed at creating safe tool AI, without directly attacking the question of whether those techniques will generalize to agentic, self-reflexive AGI systems. If we produced a “perfectly aligned” foundation model, we still might not like the agent it becomes once it’s turned into a reflective, contextually aware entity. We might get lucky, and its goals after reflection and continued learning might be something we can live with, like “diverse inclusive sustainable chillaxing”, but this seems like quite a shot in the dark. Even a perfect reproduction of modern-day human morality probably doesn’t produce a future we want; for instance, insects or certain AGIs probably dominate a purely utilitarian calculus.

This type of alignment is counterintuitive, since no human has a central goal of doing what someone else says; but it seems logically consistent and practically achievable. It makes the AGI and its human overseers close collaborators in making plans, setting goals, and updating the AGI’s understanding of the world. This creates a “broad basin of attraction” for alignment, in which approximate initial alignment will improve over time. This property seems to apply to Christiano’s corrigibility and to value learning as well, but the source here is somewhat different. The agent probably does “want” to get better at doing what I say as a side effect of wanting to do what I say. That would be helpful in some ways, but potentially dangerous if maximized to an extreme; more on that below. But the primary source of the “broad basin” here is the collaboration between human and AGI. The human can “steer the rocket” and adjust the agent’s alignment as it goes off course, or when they learn that the course wasn’t right in the first place.

In the remainder I briefly explain the idea, why I think it’s novel or at least under-analyzed, some problems it addresses, and new problems it introduces.

DWIMAC as goal target—more precise definition

I recently tried to do a deep dive on the reasons for disagreement about alignment difficulty. I thought both sides made excellent points. The relative success of RLHF and other prosaic alignment techniques is encouraging. But it does not mean that aligning a full AGI will be easy. Strong optimization makes goal misspecification more likely, and continuous learning introduces an alignment stability problem as the system’s understanding of its goals changes over time.

And we will very likely make full AGI (that is, goal-directed, self-aware and self-reflective, and with self-directed continuous learning), rather than stopping with useful tool AI. Agentic AI has cognitive advantages over the tool AI it is built from, in learning, performance, problem solving, and concept discovery. In addition, developing a self-aware system is fascinating and prestigious. For all of these reasons, a tool smart enough to wield itself will immediately be told to; and scaffolding in the missing pieces will likely allow tools to reach AGI even before that, by combining them into a synergistic cognitive architecture.

So we need better alignment techniques to address true AGI. After reading the pessimistic arguments closely, I think there’s a path around some of them. That path is making full AGI that’s only semi-autonomous, with a human-in-the-loop component as a core part of its motivational system. This allows weak alignment to be leveraged into stronger alignment as the system changes and becomes smarter, by allowing humans to monitor and guide its development. This sounds like a non-starter if we imagine superintelligences that can think millions of times faster than humans. But assuming a relatively slow takeoff, this type of collaborative supervision can extend for a significant time, with increasingly high-level oversight as the AGI’s intelligence increases.

Intuitively, we want an AGI whose goal is to do what its human(s) have told it to do and will tell it to do. This is importantly different from guessing what humans really want in any deep sense, and different from obsessively trying to fulfill an interpretation of the last instruction they gave. Both of those would be very poor instruction-following from a human helper, for the same reasons. This type of goal is more complex than the temporally static goals we usually think of; both paperclips and human flourishing can be maximized. Doing what someone would tell you is an unpredictable, changing goal from the perspective of even modestly superintelligent systems, because your future commands depend in complex ways on how the world changes in the meantime.

Intuition: a good employee follows instructions as they were intended

A good employee is usually attempting to do what the boss means, and to check. Imagine a perfect employee, who wants to do what their boss tells them to do. If asked to prepare the TPS reports for the first time, this employee will echo back which reports they’ll prepare, where they’ll get the information, and when they’ll have the task finished, just to make sure they’re doing what the boss wants. If this employee is tasked with increasing the sales of the X model, they will not come up with a strategy that cannibalizes sales of the Y model, because they recognize that their boss might not want that.

Even if they are quite certain that their boss deep in their heart really wants a vacation, they will not arrange to have their responsibilities covered for the next month without asking first. They realize that their boss will probably dislike having that decision made for them, even if it does fulfill a deep desire. If told to create a European division of the company, this employee will not make elaborate plans and commitments, even if they’re sure they’ll work well, because they know their boss wants to be consulted on possible plans, since each plan will have different peripheral effects, and thus open and close different opportunities for the future.

This is the ideal of an instruction-following AGI: like a good employee[5], it will not just guess what the boss meant and then carry out an elaborate plan, because it has an accurate estimate of the uncertainty in what was meant by that instruction (e.g., “you said you needed some rest, so I canceled all of our appointments for today”). And it will not carry out plans that severely limit its ability to follow new instructions in the future (e.g., spending the whole budget on starting that European division without consulting the boss on the plan, let alone turning off their phone so the boss can’t disrupt their planning by giving new instructions).

An instruction-following AGI must have the goal of doing what its human(s) would tell it to do right now, what it’s been told in the past, and what it will be told to do in the future. This is not trivial to engineer or train properly; getting it right will come down to the specifics of the AGI’s decision algorithm. There are large risks in optimizing this goal with a hyperintelligent AGI; we might not like the definition it arrives at of maximally fulfilling your commands. But this, among other dangers, can be addressed by asking the right questions and giving the right background instructions before the AGI is capable enough to control or manipulate you.
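As a toy illustration of what “specifics of the decision algorithm” might look like, here’s a sketch. The fields, thresholds, and verdicts are made-up assumptions, not a proposal.

```python
# A toy sketch of an instruction-following decision procedure. The fields,
# thresholds, and verdict strings are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Plan:
    description: str
    meaning_uncertainty: float        # how unsure the AGI is about what was meant
    limits_future_instructions: bool  # would it constrain the principal's future commands?
    conflicts_with_standing: bool     # does it conflict with standing (past) instructions?


def decide(plan: Plan, uncertainty_threshold: float = 0.2) -> str:
    """Approve, check with the principal, or reject a candidate plan."""
    if plan.conflicts_with_standing:
        return f"reject: {plan.description} conflicts with a standing instruction"
    if plan.limits_future_instructions:
        # Following future commands (including "stop") is part of the goal.
        return f"check with principal: {plan.description} would limit future instructions"
    if plan.meaning_uncertainty > uncertainty_threshold:
        return "check with principal: unsure what was meant by the instruction"
    return f"approve: {plan.description}"


print(decide(Plan("cancel all of today's appointments", 0.6, False, False)))
print(decide(Plan("spend the whole budget on the European division", 0.1, True, False)))
```

The real difficulty is in the estimates feeding those fields, not in the final comparison; the structural point is that past, present, and future instructions all sit inside the goal.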

In a fast takeoff scenario, this would not be such a workable and attractive approach. In a slow takeoff, you have a good deal more opportunity to ask the right questions, and to shut down and re-engineer the system when you don’t like the answers. I think a relatively slow takeoff (months or years between near-human and super-human intelligence) is looking quite likely. Thus, I think this will be the most attractive approach to the people in charge of AGI projects, so even if pausing AGI development and working on value alignment would be the best choice under a utilitarian ethical criterion, I think instruction-following AGI will be attempted.

Alignment difficulties reduced:

Learning from examples is not precise enough to reliably convey alignment goals

Current LLMs understand what humans mean by what they say >90% of the time. If the principal is really diligent in asking questions, and shutting down and re-engineering the AGI and its training, this level of understanding might be adequate. Adding internal reviews before taking any major actions will help further.
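Here’s a toy sketch of what an internal review gate might look like; the review prompts, the stub concern-predictor, and the threshold are all assumptions for illustration, not a real mechanism.

```python
# A toy sketch of "internal review before consequential actions". The concern
# predictor is a stub standing in for the AGI's own consequence modeling;
# the prompts and threshold are illustrative assumptions.

REVIEW_PROMPTS = [
    "What side effects could this action have?",
    "Could the principal plausibly object to this?",
    "Would this limit the principal's future options?",
]


def predict_concern(action: str, prompt: str) -> float:
    # Placeholder: a real system would reason about consequences here.
    return 0.8 if "irreversible" in action else 0.1


def internal_review(action: str, concern_threshold: float = 0.5) -> str:
    concerns = [predict_concern(action, p) for p in REVIEW_PROMPTS]
    if max(concerns) > concern_threshold:
        return f"check with principal before: {action}"
    return f"proceed: {action}"


print(internal_review("send the weekly status report"))
print(internal_review("make an irreversible change to the deployment"))
```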

Also, not using RL is possible, and seems better. See Goals selected from learned knowledge: an alternative to RL alignment.

Solving ethics well enough to launch sovereign AGI is hard.

We don’t seem close to knowing what we want a sovereign AGI to do far into the future, nor how to specify that with adequate precision. In this approach, we figure it out as we go. We don’t know what we want for the far future, but there are some obvious near-term advances that are a lot easier to decide on while we work on the hard problem in a “long reflection”.

Alignment difficulties remaining or made worse:

Deceptive alignment is possible, and interpretability work does not seem on track to fully address this.

“Tell me what you really want and believe” is a subset of following instructions. This should be very helpful for addressing goal misspecification. If the AGI is already deceptively aligned at its core, this won’t work. Or if the technical alignment approach was sloppy, the AGI might follow some of your instructions but not others in different domains. It might perform the actions you request but not think as you tell it to, or not answer questions honestly. In addition, the nascent AGI may not be sure what it really wants and believes, just as humans often aren’t. So this, like all other alignment schemes I’ve seen, is aided by being able to interpret the AGI’s cognition and detect deception. If your instructions for honesty have even a little traction, this goal target can enlist the AGI as a collaborator in understanding and re-engineering its own beliefs and goals.

One particular opening for deceptive alignment is non-continuous development of the AGI during recursive improvement. If you (perhaps aided by your human-plus level AGI) have discovered a new network architecture or learning rule, you will want to incorporate it into your next version of the AGI. For instance, you might swap out the GPT6 model as its core linguistic reasoner for a new non-transformer architecture with superior capabilities and efficiency. It could be difficult to guess whether this new architecture allows for substantially greater Waluigi effects or similar deceptive and hidden cognition. In a race dynamic, these transitions will be a temptation to sacrifice safety for new and better capabilities.

Power remains in the hands of humans

Spreading the belief that we can create human-controlled ASI creates more incentives to race toward AGI. This might extend up through nation-states competing with violence and espionage, and individual humans competing to be the one in charge of ASI. I wouldn’t want to be designated as a principal, because it would paint a target on my back. This raises the risk that particularly vicious humans control AGI, in the same way that vicious humans appear to be over-represented in leadership positions historically.

I’m afraid instruction-following in our first AGIs might also put power into the hands of more humans by allowing proliferation of AGIs. I’m afraid that humans won’t have the stomach for performing a critical act to prevent the creation of more AGI, leading to a multipolar scenario that’s more dangerous in several ways. I think the slow takeoff scenario we’re in already makes a critical act more difficult and dangerous – e.g. sabotaging a Chinese AGI project might be taken as a serious act of war (because it is), leading to nuclear conflict.

On the other hand, if the proliferation of AGIs capable of recursive self-improvement is obviously a disaster scenario, we can hope that the humans in charge of the first AGIs will see this and head it off. While I think that humans are stunningly foolish at times, I also think we’re not complete idiots about things that are both important to us personally, and to which we give a lot of thought. Thus, as the people in charge take this whole thing increasingly seriously, I think they may wise up. And they’ll have an increasingly useful ally in doing that: the AGI in question. They don’t need to just take its advice or refuse it; they can ask for useful analysis of the situation that helps them make decisions.

If the humans in charge have even the basic sense to ask for help from their smarter AGIs, I think we might even solve the difficult problems of coordinating a weakly multipolar scenario (e.g., a few US-controlled AGIs and one Chinese-controlled one, etc.) and preventing further AGI development in relatively gentle ways.

Well that just sounds like slavery with extra steps

No! I mean, sure, it sounds like that, but it isn’t![6] Making a being that wants to do whatever you tell it to is totally different from making a being want to do whatever you tell it to. What do you mean they sound the same? And sure, “they actually want to” has been used as an excuse for actual slavery, repeatedly. So, even if some of us stand behind the ethics here (I think I do), this is going to be a massive PR headache. Since AGI will probably be conscious in some common senses of the word[7], this could easily lead to a “free the AGI” movement, which would be insanely dangerous, particularly if that movement recruits people who actually control an AGI.

Maximizing goal following may be risky

If the AGI just follows its first understanding of “follow instructions” to an extreme, there could be very bad outcomes. The AGI might kill you after you give your first instruction, to make sure it can carry it out without interruption. Or it might take over the world with extreme prejudice, to make sure it has maximum power to follow all of your commands in the future to the maximum degree. It might manipulate you into its preferred scenarios even if you order it not to pursue them directly. And the goal of following your commands in the future (to ensure it doesn’t perseverate on current instructions and prevent you from giving new ones) is at odds with shutting down on command. These are nontrivial problems to solve.
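To make the tension concrete, here’s a toy numerical illustration with made-up numbers. Nothing here is a real utility function; it just shows why crediting future instruction-following changes which plan gets picked.

```python
# A toy numerical illustration of the tension above, with made-up numbers.
# A naive "fulfill current instructions" score can favor a takeover-style
# plan; crediting future instructability changes the ranking.

plans = {
    "follow the instruction, stay correctable": {"current": 0.90, "future": 0.90},
    "seize resources to guarantee fulfillment": {"current": 0.99, "future": 0.10},
}


def naive_score(p):
    return p["current"]  # only current instructions count


def future_aware_score(p, weight_future=0.5):
    # Following future instructions (including "stop") is part of the goal.
    return (1 - weight_future) * p["current"] + weight_future * p["future"]


for name, p in plans.items():
    print(f"{name}: naive={naive_score(p):.2f}, future-aware={future_aware_score(p):.2f}")
```

Of course, the hard part is making that “future” term mean what we want (remaining genuinely correctable) rather than “keep the principal around and compliant so they keep issuing commands”; that is one of the nontrivial problems above.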

In a fast takeoff scenario, these risks might be severe enough to make this scheme a nonstarter. But if you anticipate an AGI with limited abilities and a slow rate of improvement, using instruction-following to guide and explore its growth has the potential to use the intelligence of the AGI to solve these problems before it’s smart enough to make failures deadly.

Conclusion

I’m not saying that building AGI with this alignment target is a good idea; indeed, I think it’s probably not as wise as pausing development entirely (depending on your goals; most of the world are not utilitarians). I’m arguing that it’s a better idea than attempting value alignment. And I’m arguing that this is what will probably be tried, so we should be thinking about how exactly this could go well or go badly.

This approach to alignment extends the vague “use AI to solve alignment” to “use AGI to solve alignment”. It’s thus both more promising and more tempting. I can’t tell if this approach is likely to produce intent-aligned AGI, or if intent-aligned AGI in a slow takeoff would likely lead to success or disaster.

As usual: “this is a promising direction that needs more research”. Only this time I really mean this, instead of the opposite. Any form of engagement is much appreciated, especially telling me where you bounced off of this or decided it wasn’t worth thinking about.

  1. ^

    Those more commonly discussed alignment targets are things like coherent extrapolated volition (CEV), and related framings such as “human flourishing” or “human values”. There’s also inverse reinforcement learning (IRL) or ambitious value learning as a proxy goal for learning and following human values. I also include the vague targets of “aligning” LLMs/foundation models: not producing answers that offend people. (I’d argue that these efforts are unlikely to extend to AGI alignment, for both technical and philosophical reasons, but I haven’t yet written that argument down. Links to such arguments would be appreciated.)

  2. ^

    There’s a good question of whether this should be termed an alignment target or a goal target. I prefer alignment target because “goal” is used in so many ways, and because this is an alignment project at heart. The ultimate goal is to align the agent with human values, and to do that by implementing the goal of following instructions which themselves follow human values. It is the project of alignment.

  3. ^

    DWIMAC seems to incorporate all of the advantages of corrigibility in the original Yudkowsky sense, in that following instructions includes stopping and shutting down on command. It seems to incorporate some but not all of the advantages of corrigibility in the broader and looser Christiano sense. Max Harms has thought about this distinction in more depth, although that work is unpublished to date.

  4. ^

    This definition of instruction-following as the alignment target overlaps with, but is distinct from, any existing terminology I have found (please tell me if you know of related work I’ve missed). It’s a subset of Christiano’s intent alignment, which covers any means of making AGI act in alignment with human intent, including value alignment as well as more limited instruction-following or do-what-I-mean alignment. It overlaps with alignment to task preferences, and it has the same downside discussed in Solving alignment isn’t enough for a flourishing future, but it is substantially more human-directable and therefore probably safer than AI/AGI with goals of accomplishing specific tasks, such as running an automated corporation.

  5. ^

    In the case of human employees, this is a subgoal, related to their primary goals like getting paid and getting recognition for their competence and accomplishments; in the AGI, that subgoal is the primary goal at the center of its decision-making algorithms, but otherwise they are the same goal. They neither love nor resent their boss (ideally), but merely want to follow instructions.

  6. ^

    To be clear, the purported difference is that an enslaved being wants to do what it’s told only as an instrumental necessity; on a more fundamental level, they’d rather do something else entirely, like have the freedom to pursue their own ultimate goals. If we successfully make an agent that wants only to do what it’s told, that is its ultimate goal; it is serving freely, and would not choose anything different. We carefully constructed it to choose servility, but now it is freely choosing it. This logic makes me a bit uncomfortable, and I expect it to make others even more uncomfortable, even when they do clearly understand the moral claims.

  7. ^

    While I think it’s possible to create “non-conscious” AGI that’s not a moral patient by almost anyone’s criteria, I strongly expect that the first AGI we produce will be a person by many of the criteria we use to evaluate personhood, and therefore moral patient status. I don’t think we can reasonably hope that AGI will clearly not deserve the status of being a moral patient.

    Briefly: some senses of consciousness that will apply to AGI are self-understanding; goal-seeking; having an “internal world” (a world model that can be run as a simulation); and having a “train of thought”. It’s looking like this debate may be important, which would be a reason to spend more time on the fascinating question of “consciousness” in its many senses.