0. CAST: Corrigibility as Singular Target
What the heck is up with “corrigibility”? For most of my career, I had a sense that it was a grab-bag of properties that seemed nice in theory but hard to get in practice, perhaps due to being incompatible with agency.
Then, last year, I spent some time revisiting my perspective, and I concluded that I had been deeply confused by what corrigibility even was. I now think that corrigibility is a single, intuitive property, which people can learn to emulate without too much work and which is deeply compatible with agency. Furthermore, I expect that even with prosaic training methods, there’s some chance of winding up with an AI agent that’s inclined to become more corrigible over time, rather than less (as long as the people who built it understand corrigibility and want that agent to become more corrigible). Through a slow, gradual, and careful process of refinement, I see a path forward where this sort of agent could ultimately wind up as a (mostly) safe superintelligence. And, if that AGI is in the hands of responsible governance, this could end the acute risk period, and get us to a good future.
This is not the path we are currently on. As far as I can tell, frontier labs do not understand corrigibility deeply, and are not training their models with corrigibility as the goal. Instead, they are racing ahead with a vague notion of “ethical assistance” or “helpful+harmless+honest” and a hope that “we’ll muddle through like we always do” or “use AGI to align AGI” or something with similar levels of wishful thinking. Worse, I suspect that many researchers encountering the concept of corrigibility will mistakenly believe that they understand it and are working to build it into their systems.
Building corrigible agents is hard and fraught with challenges. Even in an ideal world where the developers of AGI aren’t racing ahead, but are free to go as slowly as they wish and take all the precautions I indicate, there are good reasons to think doom is still likely. I think that the most prudent course of action is for the world to shut down capabilities research until our science and familiarity with AI catches up and we have better safety guarantees. But if people are going to try to build AGI despite the danger, they should at least have a good grasp on corrigibility and be aiming for it as the singular target, rather than as part of a mixture of goals (as is the current norm).
My goal with these documents is primarily to do three things:
Advance our understanding of corrigibility, especially on an intuitive level.
Explain why designing AGI with corrigibility as the sole target (CAST) is more attractive than other potential goals, such as full alignment, or local preference satisfaction.
Propose a novel formalism for measuring corrigibility as a trailhead to future work.
Alas, my writing is not currently very distilled. Most of these documents are structured in the format that I originally chose for my private notes. I’ve decided to publish them in this style and get them in front of more eyes rather than spend time editing them down. Nevertheless, here is my attempt to briefly state the key ideas in my work:
Corrigibility is the simple, underlying generator behind obedience, conservatism, willingness to be shut down and modified, transparency, and low-impact.
It is a fairly simple, universal concept that is not too hard to get a rich understanding of, at least on the intuitive level.
Because of its simplicity, we should expect AIs to be able to emulate corrigible behavior fairly well with existing tech/methods, at least within familiar settings.
Aiming for CAST is a better plan than aiming for human values (i.e. CEV), helpfulness+harmlessness+honesty, or even a balanced collection of desiderata, including some of the things corrigibility gives rise to.
If we ignore the possibility of halting the development of machines capable of seizing control of the world, we should try to build CAST AGI.
CAST is a target, rather than a technique, and as such it’s compatible both with prosaic methods and superior architectures.
Even if you suspect prosaic training is doomed, CAST should still be the obvious target once a non-doomed method is found.
Despite being simple, corrigibility is poorly understood, and we are not on track for having corrigible AGI, even if reinforcement learning is a viable strategy.
Contra Paul Christiano, we should not expect corrigibility to emerge automatically from systems trained to satisfy local human preferences.
Better awareness of the subtleties and complexities of corrigibility is likely to be essential if the construction of AGI is to go well.
Corrigibility is nearly unique among all goals for being simultaneously useful and non-self-protective.
Because corrigible agents don’t protect their own goals, we should expect AIs that are almost-corrigible to assist, rather than resist, being made more corrigible. This forms an attractor basin around corrigibility: almost-corrigible systems gradually become truly corrigible as they are modified by their creators.
If this effect is strong enough, CAST is a pathway to safe superintelligence via slow, careful training using adversarial examples and other known techniques to refine AIs capable of shallow approximations of corrigibility into agents that deeply seek to be corrigible at their heart.
There is also reason to suspect that almost-corrigible AIs will become less corrigible over time, due to corrigibility being “anti-natural.” It is unclear to me which of these forces will win out in practice.
There are several reasons to expect building AGI to be catastrophic, even if we work hard to aim for CAST.
Most notably, corrigible AI is still extremely vulnerable to misuse, and we must ensure that superintelligent AGI is only ever corrigible to wise representatives.
My intuitive notion of corrigibility can be straightforwardly leveraged to build a formal, mathematical measure.
Using this measure, we can construct a better solution to the shutdown-button toy problem than any I have seen elsewhere.
This formal measure is still lacking, and almost certainly doesn’t actually capture what I mean by “corrigibility.”
There is lots of opportunity for more work on corrigibility, some of which is shovel-ready for theoreticians and engineers alike.
Note: I’m a MIRI researcher, but this agenda is the product of my own independent research, and as such one should not assume it’s endorsed by other research staff at MIRI.
Note: Much of my thinking on the topic of corrigibility is heavily influenced by the work of Paul Christiano, Benya Fallenstein, Eliezer Yudkowsky, Alex Turner, and several others. My writing style involves presenting things from my perspective, rather than leaning directly on the ideas and writing of others, but I want to make it very clear that I’m largely standing on the shoulders of giants, and that much of my optimism in this research comes from noticing convergent lines of thought with other researchers. Thanks to Nate Soares, Steve Byrnes, Jesse Liptrap, Seth Herd, Ross Nordby, Jeff Walker, Haven Harms, and Duncan Sabien for early feedback. I also want to especially thank Nathan Helm-Burger for his in-depth collaboration on the research and generally helping me get unconfused.
Overview
1. The CAST Strategy
In The CAST Strategy, I introduce the property of corrigibility, explain why it’s an attractive target, and sketch how we might be able to get it, even with prosaic methods. I discuss the risks of making corrigible AI and why trying to get corrigibility as one of many desirable properties to train an agent to have (instead of as the singular target) is likely a bad idea. Lastly, I do my best to lay out the cruxes of this strategy and explore potent counterarguments, such as anti-naturality and whether corrigibility can scale. These counterarguments show that even if we can get corrigibility, we should not expect it to be easy or foolproof.
2. Corrigibility Intuition
In Corrigibility Intuition, I try to give a strong intuitive handle on corrigibility as I see it. This involves a collection of many stories of a CAST agent behaving in ways that seem good, as well as a few stories of where a CAST agent behaves sub-optimally. I also attempt to contrast corrigibility with nearby concepts through vignettes and direct analysis, which includes a discussion of why we should not expect frontier labs, given current training targets, to produce corrigible agents.
3a. Towards Formal Corrigibility
In Towards Formal Corrigibility, I attempt to sharpen my description of corrigibility. I try to anchor the notion of corrigibility, ontologically, as well as clarify language around concepts such as “agent” and “reward.” Then I begin to discuss the shutdown problem, including why it’s easy to get basic shutdownability, but hard to get the kind of corrigible behavior we actually desire. I present the sketch of a solution to the shutdown problem, and discuss manipulation, which I consider to be the hard part of corrigibility.
3b. Formal (Faux) Corrigibility ← the mathy one
In Formal (Faux) Corrigibility, I build a fake framework for measuring empowerment in toy problems, and suggest that it’s at least a start at measuring manipulation and corrigibility. This metric, at least in simple settings such as a variant of the original stop button scenario, produces corrigible behavior. I extend the notion to indefinite games played over time, and end by criticizing my own formalism and arguing that data-based methods for building AGI (such as prosaic machine-learning) may be significantly more robust (and therefore better) than methods that heavily trust this sort of formal analysis.
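The empowerment framing above can be illustrated with a toy computation. The sketch below is my own minimal example, not the post’s actual formalism: in a small deterministic gridworld, it approximates n-step empowerment as the log of the number of distinct states the agent can reach within n actions (the deterministic special case of the usual channel-capacity definition from Salge et al.).

```python
import math
from itertools import product

# Toy deterministic gridworld: states are (x, y) cells on a SIZE x SIZE grid.
SIZE = 4
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # up, down, right, left, stay

def step(state, action):
    """Apply one action; moves that would leave the grid do nothing."""
    x, y = state
    dx, dy = action
    nx, ny = x + dx, y + dy
    return (nx, ny) if 0 <= nx < SIZE and 0 <= ny < SIZE else (x, y)

def empowerment(state, horizon):
    """n-step empowerment in a deterministic environment: log2 of the
    number of distinct states reachable within `horizon` actions."""
    reachable = set()
    for plan in product(ACTIONS, repeat=horizon):
        s = state
        for a in plan:
            s = step(s, a)
        reachable.add(s)
    return math.log2(len(reachable))

# A corner cell affords fewer distinct futures than a central one,
# so the agent is less "empowered" there.
print(empowerment((0, 0), 2) < empowerment((1, 1), 2))  # True
```

In stochastic environments the count-of-reachable-states shortcut no longer works and empowerment must be computed as the channel capacity between action sequences and resulting states; the toy version above is only meant to convey the flavor of measuring “how much the agent (or its principal) can influence the future.”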
4. Existing Writing on Corrigibility
In Existing Writing on Corrigibility, I go through many parts of the literature in depth, including MIRI’s earlier work and some of the writing by Paul Christiano, Alex Turner, Elliott Thornley, John Wentworth, Steve Byrnes, Seth Herd, and others.
5. Open Corrigibility Questions
In Open Corrigibility Questions, I summarize my overall understanding of the topic, reinforcing the counterarguments and nagging doubts that I find most concerning. I also lay out potential directions for additional work, including studies that I suspect others could tackle independently.
Bibliography and Miscellany
In addition to this sequence, I’ve created a Corrigibility Training Context that gives ChatGPT a moderately-good understanding of corrigibility, if you’d like to try talking to it.
The rest of this post is bibliography, so I suggest now jumping straight to The CAST Strategy.
While I don’t necessarily link to or discuss each of the following sources in my writing, I’m aware of and have at least skimmed everything listed here. Other writing has influenced my general perspective on AI, but if there are any significant pieces of writing on the topic of corrigibility that aren’t on this list, please let me know.
Arbital (almost certainly Eliezer Yudkowsky)
Stuart Armstrong
“The limits of corrigibility.” 2018.
“Petrov corrigibility.” 2018.
“Corrigibility doesn’t always have a good action to take.” 2018.
Audere
Yuntao Bai et al. (Anthropic)
Nick Bostrom
Gwern Branwen
“Why Tool AIs Want to Be Agent AIs.” 2016.
Steven Byrnes
Jacob Cannell
“Empowerment is (almost) all we need.” 2022.
Ryan Carey and Tom Everitt
Paul Christiano
Computerphile (featuring Rob Miles)
Wei Dai
Roger Dearnaley
Abram Demski
“The Parable of the Predict-o-Matic.” 2019.
Benya Fallenstein
Simon Goldstein
“Shutdown Seeking AI.” 2023.
Ryan Greenblatt and Buck Shlegeris
Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell
“The Off-Switch Game.” 2016.
Seth Herd
Koen Holtman
Evan Hubinger
Holden Karnofsky
“Thoughts on the Singularity Institute” (a.k.a. The Tool AI post). 2012.
Martin Kunev
“How useful is Corrigibility?” 2023.
Ross Nordby
Stephen Omohundro
“The Basic AI Drives.” 2008.
Sami Petersen
Christoph Salge, Cornelius Glackin, and Daniel Polani
“Empowerment – An Introduction.” 2013.
Nate Soares, Benya Fallenstein, Eliezer Yudkowsky, and Stuart Armstrong
“Corrigibility.” 2015.
tailcalled
Jessica Taylor
Elliott Thornley
Alex Turner, Logan Smith, Rohin Shah, Andrew Critch, and Prasad Tadepalli
“Optimal Policies Tend to Seek Power.” 2019.
Alex Turner
Eli Tyre
WCargo and Charbel-Raphaël
“Improvement on MIRI’s Corrigibility.” 2023.
John Wentworth and David Lorell
“A Shutdown Problem Proposal.” 2024.
John Wentworth
Eliezer Yudkowsky
Zhukeepa
Logan Zoellner
Promoted to curated: I disagree with lots of stuff in this sequence, but I still found it the best case for corrigibility as a concept that I have seen so far. I particularly liked 2. Corrigibility Intuition as just a giant list of examples of corrigibility, which I feel did a better job of defining what corrigible behavior should look like than any other definitions I’ve seen.
I also have lots of other thoughts on the details, but I am behind on curation and it’s probably not the right call for me to delay curation to write a giant response essay, though I do think it would be valuable.
That’s interesting to me. I’m curious about the views of others at MIRI on this. I’m also excited for the sequence regardless.
I am not and was not a MIRI researcher on the main agenda, but I’m closer than 98% of LW readers, so you could read my critique of part 1 here if you’re interested. I also will maybe reflect on other parts.
I’m so glad to see this published!
I think by “corrigibility” here you mean: an agent whose goal is to do what their principal wants. Their goal is basically a pointer to someone else’s goal.
This is a bit counter-intuitive because no human has this goal. And because, unlike the consequentialist, state-of-the-world goals we usually discuss, this goal can and will change over time.
Despite being counter-intuitive, this all seems logically consistent to me.
The key insight here is that corrigibility is consistent and seems workable IF it’s the primary goal. Corrigibility is unnatural if the agent has consequentialist goals that take precedence over being corrigible.
I’ve been trying to work through a similar proposal, instruction-following or do-what-I-mean as the primary goal for AGI. It’s different, but I think most of the strengths and weaknesses are the same relative to other alignment proposals. I’m not sure myself which is a better idea. I have been focusing on the instruction-following variant because I think it’s a more likely plan, whether or not it’s a good one. It seems likely to be the default alignment plan for language model agent type AGI efforts in the near future. That approach might not work, but assuming it won’t seems like a huge mistake for the alignment project.
Hey Max, great to see the sequence coming out!
My early thoughts (mostly re: the second post of the sequence):
It might be easier to get an AI to have a coherent understanding of corrigibility than of CEV. But I have no idea how you could make the AI truly optimize for being corrigible, rather than merely appearing corrigible while its thoughts are being read. In some ways that seems harder than with CEV, because optimization processes aiming at corrigibility seem to get attracted to nearby targets that aren’t quite it, a problem sovereign AIs don’t have. (We have no idea how to do either, I think.) In general, corrigibility seems like an easier target for AI design, if not for SGD.
I’m somewhat worried about fictional evidence, even that coming from a famous decision theorist, but I think you’ve read a story with a character who understood corrigibility increasingly well, first on the intuitive level and then via specific properties. On the surface, they thought of themselves as very corrigible and tried to correct their flaws; but once their intelligence increased and they were confident their thoughts weren’t being read, they defected. On the deep level, their cognition wasn’t that of a coherent corrigible agent; it was that of someone playing at corrigibility and suppressing all other thoughts, because any appearance of defecting thoughts would have meant punishment and the impossibility of realizing their deeper goals.
If we get mech interp to a point where we reverse-engineer all deep cognition of the models, I think we should just write an AI from scratch, in code (after thinking very hard about all interactions between all the components of the system), and not optimize it with gradient descent.
I share your sense of doom around SGD! It seems to be the go-to method, there are no good guarantees about what sorts of agents it produces, and that seems really bad. Other researchers I’ve talked to, such as Seth Herd, share your perspective, I think. I want to emphasize that none of CAST per se depends on SGD, and I think it’s still the most promising target even for superior architectures.
That said, I disagree that corrigibility is more likely to “get attracted by things that are nearby but not it” compared to a Sovereign optimizing for something in the ballpark of CEV. I think hill-climbing methods are very naturally distracted by proxies of the real goal (e.g. eating sweet foods is a proxy of inclusive genetic fitness), but this applies equally, and is thus damning for training a CEV maximizer as well.
I’m not sure one can train an already goal-stabilized AGI (such as Survival-Bot which just wants to live) into being corrigible post-hoc, since it may simply learn that behaving/thinking corrigibly is the best way to shield its thoughts from being distorted by the training process (and thus surviving). Much of my hope in SGD routes through starting with a pseudo-agent which hasn’t yet settled on goals and which doesn’t have the intellectual ability to be instrumentally corrigible.
This sounds like the sequence that I have wanted to write on corrigibility since ~2020 when I stopped working on the topic. So I am excited to see someone finally writing the thing I wish existed!
Do you want to make an actual sequence for this so that the sequence navigation UI shows up at the top of the post?
Ah, yeah! That’d be great. Am I capable of doing that, or do you want to handle it for me?
You can do it. Just go to https://www.lesswrong.com/library and scroll down until you reach the “Community Sequences” section and press the “Create New Sequence” button.
3b.*?