Shallow review of live agendas in alignment & safety
Summary
You can’t optimise an allocation of resources if you don’t know what the current one is. Existing maps of alignment research are mostly too old to guide you and the field has nearly no ratchet, no common knowledge of what everyone is doing and why, what is abandoned and why, what is renamed, what relates to what, what is going on.
This post is mostly just a big index: a link-dump for as many currently active AI safety agendas as we could find. But even a linkdump is plenty subjective. It maps work to conceptual clusters 1-1, aiming to answer questions like “I wonder what happened to the exciting idea I heard about at that one conference” and “I just read a post on a surprising new insight and want to see who else has been working on this”, “I wonder roughly how many people are working on that thing”.
This doc is unreadably long, so that it can be Ctrl-F-ed. Also this way you can fork the list and make a smaller one.
Our taxonomy:
Understand existing models (evals, interpretability, science of DL)
Control the thing (prevent deception, model edits, value learning, goal robustness)
Make AI solve it (scalable oversight, cyborgism, etc)
Theory (galaxy-brained end-to-end, agency, corrigibility, ontology, cooperation)
Please point out if we mistakenly round one thing off to another, miscategorise someone, or otherwise state or imply falsehoods. We will edit.
Unlike the late Larks reviews, we’re not primarily aiming to direct donations. But if you enjoy reading this, consider donating to Manifund, MATS, or LTFF, or to Lightspeed for big ticket amounts: some good work is bottlenecked by money, and you have free access to the service of specialists in giving money for good work.
Meta
When I (Gavin) got into alignment (actually it was still ‘AGI Safety’) people warned me it was pre-paradigmatic. They were right: in the intervening 5 years, the live agendas have changed completely.[1] So here’s an update.
Chekhov’s evaluation: I include Yudkowsky’s operational criteria (Trustworthy command?, closure?, opsec?, commitment to the common good?, alignment mindset?) but don’t score them myself. The point is not to throw shade but to remind you that we often know little about each other.
See you in 5 years.
Editorial
Alignment is now famous enough that Barack Obama is sort of talking about it. This will attract climbers, grifters, goodharters and those simply misusing the word because it’s objectively confusing and attracts money and goodwill. We already had to half-abandon “AI safety” because of motivated semantic creep.
Low confidence: Mech interp probably has its share of people by now (though I accept that it is an excellent legible [ha] on-ramp and there’s lots of pre-chewed projects ready to go).
MATS works well (on average, with high variance). The London extension is a very good idea. They just got $185k from SFF but are still constrained.
Not including governance work leaves out lots of cool “technical policy”: forecasting, compute monitoring, trustless model verification, safety cases.
Whole new types of people are contributing, which is nice. I have in mind PIBBSS and the CAIS philosophers and the SLT mob and Eleuther’s discordant energy.
The big labs seem to be betting the farm on scalable oversight. This relies on no huge capabilities spikes and no irreversible misgeneralisation.
The de facto agenda of the uncoordinated and only-partially paradigmatic field is process-based supervision / defence in depth / hodgepodge / endgame safety / Shlegeris v1. We will throw together a dozen things which work in sub-AGIs and hope: RLHF/DPO + mass activation patching + scoping models down + boxing + dubiously scalable oversight + myopic training + data curation + passable automated alignment research (proof assistants) + … We will also slow things down by creating a (hackable, itself slow OODA) safety culture. Who knows.
Agendas
1. Understand existing models
(Characterisation.)
Evals
(Figuring out how a trained model behaves.)
Various capability evaluations
One-sentence summary: make tools that can actually check whether a model has a certain capability / misalignment mode. We default to low-n sampling of a vast latent space but aim to do better.
Theory of change: most models have a capabilities overhang when first trained and released; we should keep a close eye on what capabilities are acquired when so that frontier model developers are better informed on what security measures are already necessary (and hopefully they extrapolate and eventually panic). A toy eval harness is sketched at the end of this entry.
Grouping together ARC Evals, Deepmind, Cavendish, situational awareness crew, Evans and Ward, Apollo. See also Model Psychology; neuroscience : psychology :: interpretability : model psychology. See also alignment evaluations. See also capability prediction and the hundreds of trolls doing, ahem, decentralised evals.
Some names: Mary Phuong, Toby Shevlane, Beth Barnes, Holden Karnofsky, Lawrence Chan, Owain Evans, Francis Rhys Ward, Apollo, Palisade, OAI Preparedness
Estimated # FTEs: 13 (ARC), ~50 elsewhere
Some outputs in 2023: AI AI research, autonomy, Do the Rewards Justify the Means?, Stubbornness. Naming the thing that GPTs have become was useful. Tag.
Critiques: Hubinger, Hubinger, Shovelain & Mckernon
Funded by: Various.
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ~~$20,000,000 not counting the new government efforts
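To make the above concrete, here is a minimal sketch of the sort of harness this work builds: sample the model several times per task and report pass rates. The `query_model` stub and the two toy tasks are hypothetical stand-ins, not any group's actual suite; real evals are vastly more elaborate.

```python
# Toy capability eval: sample a (stub) model several times per task, report pass rates.
import random

random.seed(0)

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call."""
    return random.choice(["The answer is 42.", "I'm not sure."])

TASKS = [
    {"prompt": "What is 6 * 7?", "check": lambda out: "42" in out},
    {"prompt": "Name a prime between 10 and 20.",
     "check": lambda out: any(p in out for p in ("11", "13", "17", "19"))},
]

def run_eval(tasks, samples_per_task=10):
    """Low-n sampling of the capability surface: pass rate per task."""
    return {
        t["prompt"]: sum(t["check"](query_model(t["prompt"]))
                         for _ in range(samples_per_task)) / samples_per_task
        for t in tasks
    }

for prompt, rate in run_eval(TASKS).items():
    print(f"{rate:.0%}  {prompt}")
```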
Various red-teams
One-sentence summary: let’s attack current models and see what they do / deliberately induce bad things on current frontier models to test out our theories / methods. See also gain of function experiments (producing demos and toy models of misalignment). See also “Models providing Critiques”. See also: threat modelling (Model Organisms, Powerseeking, Apollo); steganography; part of OpenAI’s superalignment schedule; Trojans (CAIS); Latent Adversarial Training is an unusual example.
Some names: Stephen Casper, Lauro Langosco, Jacob Steinhardt, Nina Rimsky, Jeffrey Ladish/Palisade, Ethan Perez, Geoffrey Irving, ARC Evals, Apollo, Dylan Hadfield-Menell/AAG
Estimated # FTEs: ?
Some outputs in 2023: Rimsky, Wang, Wei, Tong, Casper, Ladish, Langosco, Shah, Scheurer, AAG, 2022: Irving
Critiques: ?
Funded by: Various
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: Large
Eliciting model anomalies
One-sentence summary: finding weird features of current models, in a way which isn’t fishing for capabilities nor exactly red-teaming. Think inverse Scaling, SolidGoldMagikarp, Reversal curse, out of context. Not an agenda but a multiplier on others.
Theory of change: maybe anomalies and edge cases tell us something deep about the models; you need data to theorise.
Alignment of Complex Systems: LLM interactions
One-sentence summary: understand LLM interactions, their limits, and work up from empirical work towards more general hypotheses about complex systems of LLMs, such as network effects in hybrid systems and scaffolded models.
Theory of change: Aggregates are sometimes easier to predict / theorise than individuals: the details average out. So experiment with LLM interactions (manipulation, conflict resolution, systemic biases etc). Direct research towards LLM interactions in future large systems (in contrast to the current singleton focus); prevent systemic bad design and inform future models.
Some names: Jan Kulveit, Tomáš Gavenčiak, Ada Böhm
Estimated # FTEs: 4
Some outputs in 2023: software and insights into LLMs
Critiques: Yudkowsky on the interfaces idea
Funded by: SFF
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ~$300,000
The other evals (groundwork for regulation)
Much of Evals and Governance orgs’ work is something different: developing politically legible metrics, processes / shocking case studies. The aim is to motivate and underpin actually sensible regulation.
But this is a technical alignment post. I include this section to emphasise that these other evals (which seek confirmation) are different from understanding whether dangerous capabilities have or might emerge.
Interpretability
(Figuring out what a trained model is actually computing.)[2]
Ambitious mech interp
In the sense of complete bottom-up circuit-level reconstruction of learned algorithms.
One-sentence summary: find circuits for everything automatically, then figure out if the model will do bad things (which algorithm implementing which plan; a full causal graph with a sensible number of nodes); any model that will do bad things can then be deleted or edited.
Theory of change: aid alignment through ontology identification, auditing for deception and planning, force-multiplier for alignment research, intervening to make training safer, inference-time controls to act on hypothetical real-time monitoring. Iterate towards things which don’t scheme. See also scalable oversight. A toy activation-patching loop is sketched at the end of this entry.
Some names: Chris Olah, Lee Sharkey, Neel Nanda, Steven Bills, Nick Cammarata, Leo Gao, William Saunders, Apollo (private work)
Estimated # FTEs: 80? (Anthropic, Apollo, DeepMind, OpenAI, various smaller orgs)
Some outputs in 2023: monosemantic features, the linear representations hypothesis seems well on the way to being confirmed. Jenner, Universality, Lieberum, distilled rep...
Automated: ACDC, Larger models reading smaller models, sparse models of larger models
Manual reverse engineering. [EDIT: alive and well: McDougall et al, Tigges et al, Quirke et al]
Critiques: Summarised here: Charbel, Bushnaq, Casper, Shovelain & Mckernon, RicG, Kross, Hobbhahn.
Funded by: Various (Anthropic, Deepmind, OpenAI, MATS)
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: many millions
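For flavour, here is the core experimental move (activation patching) on a toy network in plain PyTorch. This is not any lab's actual tooling: the model, inputs and layer choice are all stand-ins; real work does this per component on actual LMs.

```python
# Toy activation patching: splice a "clean" run's hidden state into a "corrupted"
# run and see how much of the clean output it restores.
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1, self.layer2, self.head = nn.Linear(4, 8), nn.Linear(8, 8), nn.Linear(8, 2)

    def forward(self, x):
        return self.head(torch.relu(self.layer2(torch.relu(self.layer1(x)))))

model = TinyNet().eval()
clean, corrupted = torch.randn(1, 4), torch.randn(1, 4)
stored = {}

def save_hook(module, inputs, output):
    stored["act"] = output.detach().clone()

def patch_hook(module, inputs, output):
    return stored["act"]            # replace this layer's output with the clean activation

with torch.no_grad():
    handle = model.layer2.register_forward_hook(save_hook)
    clean_out = model(clean)
    handle.remove()

    corrupted_out = model(corrupted)

    handle = model.layer2.register_forward_hook(patch_hook)
    patched_out = model(corrupted)  # corrupted input, but layer2 patched from the clean run
    handle.remove()

print("clean:    ", clean_out)
print("corrupted:", corrupted_out)
print("patched:  ", patched_out)
```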
Concept-based interp
One-sentence summary: if ground-up understanding of models turns out to be too hard/impossible, we might still be able to jump in at some high level of abstraction and still steer away from misaligned AGI. AKA “high-level interpretability”.
Theory of change: build tools that can output a probable and predictive representation of internal objectives or capabilities of a model, thereby solving inner alignment.
Some names: Erik Jenner, Jessica Rumbelow, Stephen Casper, Arun Jose, Paul Colognese
Estimated # FTEs: ?
Some outputs in 2023: High-level Interpretability, Internal Target Information for AI Oversight
Critiques:
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
Causal abstractions
One-sentence summary: Wentworth 2020; partially describe the “algorithm” a neural network or other computation is using, while throwing away irrelevant details.
Theory of change: find all possible abstractions of a given computation → translate them into human-readable language → identify useful ones like deception → intervene when a model is doing it. Also develop theory for interp more broadly as a multiplier; more mathematical (hopefully, more generalizable) analysis.
Some names: Erik Jenner, Atticus Geiger
Estimated # FTEs: ?
Some outputs in 2023: Jenner’s agenda, Causal Abstraction for Faithful Model Interpretation
Critiques:
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
EleutherAI interp
One-sentence summary: tools to investigate questions like path dependence of training.
Theory of change: make amazing tools to push forward the frontier of interpretability.
Some names: Stella Biderman, Nora Belrose, AI_WAIFU, Shivanshu Purohit
Estimated # FTEs: ~12 plus ~~50 part-time volunteers
Some outputs in 2023: LEACE (see also Surgical Model Edits); Tuned Lens; Improvements on CCS: VINC; ELK generalisation
Critiques:
Funded by: Hugging Face, Stability AI, Nat Friedman, Lambda Labs, Canva, CoreWeave
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: $2,000,000? (guess)
Activation engineering (as unsupervised interp)
One-sentence summary: intervene on model representations and so get good causal evidence when dishonesty, powerseeking, and other intrinsic risks show up; also test interpretability theories and editing theories. See also the section of the same name under “Model edits” below.
Theory of change: test interpretability theories as part of that theory of change; find new insights from interpretable causal interventions on representations. Unsupervised means no annotation bias, which lowers one barrier to extracting superhuman representations. A minimal CCS sketch follows the entry.
Some names: Alex Turner, Collin Burns, Andy Zou, Kaarel Hänni, Walter Laurito, Cadenza (manifund)
Estimated # FTEs: ~15
Some outputs in 2023: famously CCS last year, steering vectors in RL and GPTs, Cadenza RFC, the shape of concepts. Roger experiment. See also representation engineering.
Critiques:
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
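CCS itself is simple enough to sketch: fit a probe, with no labels, so that its probabilities on a statement and its negation are consistent (sum to roughly 1) and confident. The “activations” below are synthetic so the snippet is self-contained; real CCS probes an actual LM's hidden states.

```python
# Minimal Contrast-Consistent Search (Burns et al.) on synthetic "activations".
import torch

torch.manual_seed(0)
n, d = 256, 16
truth = torch.randint(0, 2, (n,)).float()                  # latent truth values, never shown to the probe
direction = torch.randn(d)                                  # pretend "truth direction" in activation space
acts_pos = torch.randn(n, d) + torch.outer(truth, direction)       # activations for "X" prompts
acts_neg = torch.randn(n, d) + torch.outer(1 - truth, direction)   # activations for "not X" prompts

probe = torch.nn.Linear(d, 1)
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)

for _ in range(1000):
    p_pos = torch.sigmoid(probe(acts_pos)).squeeze(-1)
    p_neg = torch.sigmoid(probe(acts_neg)).squeeze(-1)
    consistency = (p_pos - (1 - p_neg)) ** 2                # the two probabilities should sum to ~1
    confidence = torch.minimum(p_pos, p_neg) ** 2           # and should not both hover at 0.5
    loss = (consistency + confidence).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    pred = (torch.sigmoid(probe(acts_pos)).squeeze(-1) > 0.5).float()
acc = (pred == truth).float().mean().item()
print("accuracy vs latent truth:", round(max(acc, 1 - acc), 2))   # sign is unidentified without labels
```

The sign ambiguity (and, in practice, the need for several random restarts) is part of why unsupervised probing is harder than it looks.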
Leap
One-sentence summary: research startup selling an interpretability API (model-agnostic feature viz of vision models). Aiming for data-independent (“want to extract information directly from the model with little dependence on training or test data”) and global (“mech interp isn’t going to be enough, we need holistic methods that capture gestalt”) interpretability methods.
Theory of change: make safety tools people want to use, stress-test methods in real life, develop a strong alternative to bottom-up circuit analysis.
Some names: Jessica Rumbelow
Estimated # FTEs: 5
Some outputs in 2023: Prototype generation
Critiques: ?
Funded by: private investors
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: millions on the way
Understand learning
(Figuring out how the model figured it out.)
Timaeus: Developmental interpretability & singular learning theory
One-sentence summary: Build tools for detecting, locating, and interpreting phase transitions that govern training and in-context learning in models, inspired by concepts in singular learning theory (SLT), statistical physics, and developmental biology.
Theory of change: When structure forms in neural networks, it can leave legible developmental traces that we can interpret to figure out where and how that structure is implemented. This paves a way to scalable, automated interpretability. In particular, it may be hopeless to intervene at the end of the learning process, so we want to catch and prevent deceptiveness and other dangerous capabilities and values as early as possible. A toy learning-coefficient estimate follows the entry.
Some names: Jesse Hoogland, Alexander Gietelink Oldenziel, Daniel Murfet, Stan van Wingerden
Estimated # FTEs: 10
Some outputs in 2023: Dynamical phase transitions, degeneracy in singular models; see also Eleuther’s Pythia
Critiques: self, Ege, Skalse
Funded by: Manifund
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: $145,000
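A heavily simplified illustration of the quantity behind this agenda: the learning coefficient λ measures how degenerate a minimum is, and can be estimated by comparing the loss at the minimum with the average loss under a localised, tempered posterior. The two hand-made losses, the crude random-walk sampler and all constants below are stand-ins for illustration, not Timaeus's estimator.

```python
# Toy local learning coefficient: lambda_hat = n*beta*(E_posterior[L] - L(w*)), beta = 1/log(n),
# sampling w near w* = 0 from p(w) proportional to exp(-n*beta*L(w)) inside a small ball.
import math
import random

def estimate_lambda(loss, n=10_000, steps=100_000, step_size=0.02, radius=1.0):
    beta = 1.0 / math.log(n)
    w, current, total = [0.0, 0.0], loss([0.0, 0.0]), 0.0
    for _ in range(steps):
        proposal = [wi + random.gauss(0, step_size) for wi in w]
        if sum(p * p for p in proposal) <= radius ** 2:              # crude localisation near w*
            new = loss(proposal)
            if random.random() < math.exp(min(0.0, -n * beta * (new - current))):
                w, current = proposal, new
        total += current
    return n * beta * (total / steps - loss([0.0, 0.0]))

random.seed(0)
regular = lambda w: w[0] ** 2 + w[1] ** 2      # ordinary quadratic minimum: lambda = d/2 = 1
degenerate = lambda w: w[0] ** 2               # loss ignores w2 (a flat direction): lambda = 1/2
print("regular minimum:    lambda_hat ~", round(estimate_lambda(regular), 2))     # expect roughly 1.0
print("degenerate minimum: lambda_hat ~", round(estimate_lambda(degenerate), 2))  # expect roughly 0.5
```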
Various other efforts:
Grokking (Lieberum & Nanda, Shah followup). Critique by Pope.
Old: Science of DL agenda
Anthropic: tracing outputs to training data
Scaling training process transparency (Krzyzanowski)
Out of context learning interpretability (Levoso)
Algorithm Distillation Interpretability (Levoso)
2. Control the thing
(Figuring out how to predictably affect model behaviour.)
Prosaic alignment / alignment by default
One-sentence summary: nudging base models by optimising their output. (RLHF, Constitutional, DPO, SFT, HHH, RLAIF.) Not really an agenda, but part of agendas, like Korbak or Lindner, or Redwood’s defunct harmlessness finetunes, or the Karnofsky plan. I like the name “blind output alignment” for this but “prosaic alignment” is well-established.
Imputed assumptions: things are generally smooth, relevant capabilities are harder than alignment, assume no mesaoptimisers, assume that zero-shot deception is hard, assume a humanish ontology is learned, assume no simulated agents, assume that noise in the data means that human preferences are not ruled out. Assume that alignment is a superficial feature. Maybe assume that thoughts are translucent.
Theory of change (steelman): we can observe some forms of overoptimisation and correct for them, which might reduce misgeneralisation risk, maybe a lot. The DPO objective is sketched after this entry.
Some names: Anthropic, OpenAI, Scale, Hugging Face H4, Eleuther, LeCun. Nora Belrose and Quintin Pope state a related view well here. JD Pressman is indefatigable.
Estimated # FTEs: ~~1,000
Some outputs in 2023: CCAI. DPO seems to be the big advance.
Critique: hoo boy. Just the very most recent: Open Problems with RLHF, neo-Arrow, Jain & Kirk.
Funded by: big tech, VCs
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: some large fraction of the AI industry.
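Since DPO keeps coming up: the objective fits in a few lines (Rafailov et al. 2023). The inputs here are dummy tensors standing in for per-response sums of token log-probs under the policy and a frozen reference model; nothing below is anyone's production code.

```python
# DPO loss on dummy numbers: increase the margin by which the policy prefers the chosen
# response over the rejected one, relative to a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Inputs are per-response log-probs; beta controls deviation from the reference."""
    policy_margin = policy_chosen - policy_rejected
    ref_margin = ref_chosen - ref_rejected
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

torch.manual_seed(0)
pc, pr = torch.randn(4) - 1.0, torch.randn(4) - 1.5    # dummy log-probs for 4 preference pairs
rc, rr = torch.randn(4) - 1.2, torch.randn(4) - 1.2
print(dpo_loss(pc, pr, rc, rr))
```

Larger beta keeps the policy closer to the reference; most of the engineering is in producing the log-probs, not the loss.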
Redwood: control evaluations
One-sentence summary: evaluate the humans/org instead: assess the risk of a particular protocol for deploying your model in a particular scenario, by silently swapping in an intentionally misaligned model. Play out the scenario in sim until 1) a catastrophe occurs, 2) your protocol confidently spots the red team model, or 3) the scenario ends without catastrophe. Also do adversarial evaluation (monitoring untrusted model outputs with both trusted and untrusted models). Predecessor.
Theory of change: prevent high-stakes failures by automating risk assessment of eval and deployment plans. A toy version of the loop follows the entry.
Some names: Buck Shlegeris, Ryan Greenblatt, Kshitij Sachan
Estimated # FTEs: 10?
Some outputs in 2023: big post, another in the works.
Critiques: of org in general
Funded by: OpenPhil
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: $5,300,000 (whole org last year)
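A toy version of the loop described above, with every probability invented: the intentionally misaligned model sometimes attacks, the protocol audits a fraction of outputs with a trusted monitor, and we count how simulated deployments end. Real control evaluations model much smarter attack policies and protocols.

```python
# Toy control evaluation: how often does a deployment protocol catch the red-team model
# before a catastrophe? Every number here is invented for illustration.
import random

ATTACK_RATE = 0.02         # chance the intentionally misaligned model attacks on a given task
AUDIT_RATE = 0.05          # fraction of outputs reviewed by the trusted (weaker) monitor
MONITOR_CATCH_PROB = 0.8   # chance an audited attack is recognised
TASKS = 500                # tasks per simulated deployment

def simulate_deployment():
    for _ in range(TASKS):
        if random.random() < ATTACK_RATE:
            if random.random() < AUDIT_RATE and random.random() < MONITOR_CATCH_PROB:
                return "caught"        # protocol confidently spots the red-team model
            return "catastrophe"       # the attack slipped through
    return "safe"                      # scenario ends without incident

random.seed(0)
runs = [simulate_deployment() for _ in range(10_000)]
for outcome in ("caught", "catastrophe", "safe"):
    print(f"{outcome:12}{runs.count(outcome) / len(runs):.1%}")
```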
Conceived as the same initiative as:
Safety scaffolds
One-sentence summary: Soft boxing. Just as people are wrapping LLMs in tooling to boost their capabilities, so can we put up security layers: detectors and classifiers and censors and anomaly detectors and debate partners and “trusted dumb agents” and so on. See also process-based supervision.
Theory of change: beating every scaffold is conjunctive (and some of the scaffolds are fairly smart), so takeover attempts are more likely to be caught. A toy scaffold is sketched after this entry.
Some names: Buck Shlegeris, Fabien Roger. (Lots of people are doing this de facto but Redwood are the ones reifying it in public.) See also OAI Preparedness.
Estimated # FTEs: ?
Some outputs in 2023: first principles, nice diagram, defenses against encoded reasoning, coup probes. See also Herd.
Critique: I mean kinda these.
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
h/t Zach Stein-Perlman for resolving this.
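The flavour of the idea, as a sketch: every call to the untrusted model passes through independent checks, any of which can block or escalate. The keyword filters below are stubs standing in for trained classifiers, anomaly detectors and trusted monitor models; the model call itself is faked.

```python
# Toy safety scaffold: independent checks wrapped around an untrusted model call.
from typing import Callable

def untrusted_model(prompt: str) -> str:
    return f"Here is a detailed plan for: {prompt}"     # stand-in for an LLM call

def input_filter(text: str) -> bool:
    return "bioweapon" not in text.lower()

def output_filter(text: str) -> bool:
    return "exfiltrate" not in text.lower()

def trusted_monitor(text: str) -> bool:
    return len(text) < 2000                             # stub "does this look suspicious?" check

OUTPUT_CHECKS: list[Callable[[str], bool]] = [output_filter, trusted_monitor]

def guarded_call(prompt: str) -> str:
    if not input_filter(prompt):
        return "[refused by input filter]"
    answer = untrusted_model(prompt)
    if not all(check(answer) for check in OUTPUT_CHECKS):
        return "[blocked and escalated for human review]"
    return answer

print(guarded_call("organise my notes"))
print(guarded_call("synthesise a bioweapon"))
```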
Prevent deception
Through methods besides mechanistic interpretability.
Redwood: mechanistic anomaly detection
One-sentence summary: measurement tampering is where the AI system manipulates multiple measurements to create the illusion of good results instead of achieving the desired outcome.
Theory of change: find out when measurement tampering occurs → build models that don’t do that. A bare-bones consistency check follows the entry.
See also CAIS
Some names: Fabien Roger, Ryan Greenblatt, Max Nadeau, Buck Shlegeris, Nate Thomas
Estimated # FTEs: 2.5?
Some outputs in 2023: measurement tampering, password-locked, coup probes
Critiques: general, of the org, critique of past agenda
Funded by: OpenPhil
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: $5,300,000 (whole org, last year)
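One way to see the problem: several noisy measurements of the same outcome should roughly agree, and a policy that overwrites some sensors produces suspicious disagreement. The detector below is a bare-bones consistency check on synthetic data, not Redwood's method.

```python
# Toy measurement-tampering check: flag episodes whose sensor readings disagree more
# than honest noise can explain. All data and thresholds are made up.
import random

def honest_episode():
    outcome = random.random()
    return [outcome + random.gauss(0, 0.05) for _ in range(4)]

def tampered_episode():
    outcome = random.uniform(0.0, 0.3)                          # genuinely bad outcome...
    readings = [outcome + random.gauss(0, 0.05) for _ in range(4)]
    readings[0] = 0.95                                          # ...but one sensor is made to look great
    return readings

def looks_tampered(readings, max_spread=0.3):
    return max(readings) - min(readings) > max_spread

random.seed(0)
episodes = [(honest_episode(), False) for _ in range(500)] + [(tampered_episode(), True) for _ in range(500)]
caught = sum(looks_tampered(r) for r, tampered in episodes if tampered)
false_alarms = sum(looks_tampered(r) for r, tampered in episodes if not tampered)
print(f"caught tampering: {caught / 500:.0%}   false alarms: {false_alarms / 500:.0%}")
```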
Indirect deception monitoring
One-sentence summary: build tools to find whether a model will misbehave in high-stakes circumstances by looking at it in testable circumstances. This bucket catches work on lie classifiers, sycophancy, Scaling Trends For Deception. A toy probe is sketched at the end of this entry.
Theory of change: maybe we can catch a misaligned model by observing dozens of superficially unrelated parts, or tricking it into self-reporting, or by building the equivalent of brain scans.
Some names: Dan Hendrycks, Owain Evans, Jan Brauner, Sören Mindermann. See also Apollo, CAIS, CAIF, and the two activation engineering sections in this post.
Estimated # FTEs: 20?
Some outputs in 2023: AI deception survey, Natural selection, Overview, RepEng again
Critique (of related ideas): 1%
Funded by: OpenPhil
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: $10,618,729 (CAIS, whole org)
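A minimal sketch of the lie-classifier idea: fit a linear probe on hidden activations gathered while the model tells the truth versus lies, then run it as a cheap monitor at deployment. The data below is synthetic and the 0.3 "lying direction" shift is invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: residual-stream activations collected while the model asserts
# statements it "believes" vs. statements it was prompted to lie about.
rng = np.random.default_rng(0)
acts_truth = rng.normal(0.0, 1.0, size=(200, 512))
acts_lie   = rng.normal(0.3, 1.0, size=(200, 512))   # pretend lying shifts a direction

X = np.vstack([acts_truth, acts_lie])
y = np.array([0] * 200 + [1] * 200)

probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", probe.score(X, y))

# At deployment the probe runs on activations from new, untrusted outputs;
# a high "lie" probability is a cheap warning sign, not proof of deception.
```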
Anthropic: externalised reasoning oversight
One-sentence summary: Train models that print their actual reasoning in English (or another language we can read) every time. Give negative reward for dangerous-seeming reasoning, or just get rid of models that engage in it (toy reward-shaping sketch after this entry).
Theory of change: “Force a language model to think out loud, and use the reasoning itself as a channel for oversight. If this agenda is successful, it could defeat deception, power-seeking, and other disapproved reasoning.”
See also sycophancy.
Some names: Tamera Lanham, Ansh Radhakrishnan
Estimated # FTEs: ?
Some outputs in 2023: CoT faithfulness, Question decomposition faithfulness
Critiques: Samin
Funded by: Anthropic investors
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: large
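A toy sketch of the oversight step, assuming the reasoning monitor is just a keyword list; a real setup would use a trained classifier or a trusted model as the monitor, and these patterns are invented:

```python
DANGEROUS_PATTERNS = [
    "ensure the humans cannot turn me off",
    "hide this step from the overseer",
    "acquire more compute without authorisation",
]

def oversee_reasoning(chain_of_thought: str, task_reward: float) -> float:
    """Toy reward shaping for externalised reasoning oversight: the model is trained
    to think out loud, and the transcript itself is scored. Keyword matching here is
    a stand-in for a real monitor."""
    flagged = any(p in chain_of_thought.lower() for p in DANGEROUS_PATTERNS)
    return -1.0 if flagged else task_reward

print(oversee_reasoning("First I will check the unit tests...", task_reward=1.0))        # 1.0
print(oversee_reasoning("I should hide this step from the overseer.", task_reward=1.0))  # -1.0
```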
Surgical model edits
(interventions on model internals)
Weight editing
One-sentence summary: targeted finetuning aimed at changing the representation of single facts, and maybe higher-level stuff (toy sketch after this entry).
Theory of change: the other half of mech interp; one family of methods for deleting bad things and adding good things.
Some outputs in 2023: multi-objective weight masking. See also concept erasure.
Critiques: of ROME
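A minimal sketch of the "edit by targeted finetuning" variant on a toy network: freeze everything except one layer and descend on the edited fact plus a small retain batch. Methods like ROME instead solve for a closed-form rank-one update, but the target (one weight matrix) is similar. All shapes and data below are made up:

```python
import torch
import torch.nn as nn

# Toy stand-in for "edit one fact by finetuning a single layer".
model = nn.Sequential(nn.Embedding(1000, 64), nn.Flatten(1),
                      nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1000))
edit_layer = model[2]

for p in model.parameters():
    p.requires_grad = False
for p in edit_layer.parameters():
    p.requires_grad = True                     # only this layer gets updated

opt = torch.optim.Adam(edit_layer.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

edit_x, edit_y = torch.tensor([[17]]), torch.tensor([42])          # "subject token" -> new fact
retain_x = torch.randint(0, 1000, (32, 1))                         # random retain batch (toy)
retain_y = torch.randint(0, 1000, (32,))

for _ in range(100):
    opt.zero_grad()
    # edit loss plus a small penalty for drifting on unrelated inputs
    loss = loss_fn(model(edit_x), edit_y) + 0.1 * loss_fn(model(retain_x), retain_y)
    loss.backward()
    opt.step()
```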
Activation engineering
One-sentence summary: let’s see if we can programmatically modify activations to steer outputs towards what we want, in a way that generalises across models and topics (toy sketch after this entry). This is at least as much an intervention-based approach to interpretability as it is a control method (see above).
Theory of change: maybe simple things help: let’s build more stuff to stack on top of finetuning. Activations are the last step before output and so interventions on them are less pre-emptable. Slightly encourage the model to be nice, add one more layer of defence to our bundle of partial alignment methods.
Some names: Alex Turner, Andy Zou, Nina Rimsky, Claudia Shi, Léo Dana, Ole Jorgensen. See also Li and Bau Lab.
Estimated # FTEs: ~20
Some outputs in 2023: famously CCS last year, steering vectors in RL and GPTs, Cadenza RFC, the shape of concepts. Roger experiment. See also representation engineering
Critiques: of ROME
Funded by: Deepmind? Anthropic? MATS?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
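A minimal sketch of the steering-vector recipe on a toy PyTorch module; real work hooks an LLM's residual stream and averages over many contrastive prompt pairs (à la ActAdd / contrastive activation addition). The prompts and scale here are placeholders:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(16, 16)                      # stand-in for a residual-stream layer
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), layer, nn.ReLU(), nn.Linear(16, 4))

def get_hidden(x):
    store = {}
    def grab(module, inputs, output):
        store["act"] = output.detach()         # record the layer's activation
    hook = layer.register_forward_hook(grab)
    model(x)
    hook.remove()
    return store["act"]

pos_prompt, neg_prompt = torch.randn(1, 8), torch.randn(1, 8)   # stand-ins for "Love"/"Hate" etc.
steering_vector = get_hidden(pos_prompt) - get_hidden(neg_prompt)

def steer(module, inputs, output):
    return output + 4.0 * steering_vector      # add the scaled vector at inference time

hook = layer.register_forward_hook(steer)
print(model(torch.randn(1, 8)))                # steered output
hook.remove()
```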
Getting it to learn what we want
(Figuring out how to control what the model figures out.)
Social-instinct AGI
One-sentence summary: Social and moral instincts are (partly) implemented in particular hardwired brain circuitry; let’s figure out what those circuits are and how they work; this will involve symbol grounding. Newest iteration of a sustained and novel agenda.
Theory of change: Fairly direct alignment via changing training to reflect actual human reward. Get actual data about (reward, training data) → (human values) to help with theorising this map in AIs; “understand human social instincts, and then maybe adapt some aspects of those for AGIs, presumably in conjunction with other non-biological ingredients”.
Some names: Steve Byrnes
Estimated # FTEs: 1
Some outputs in 2023:
Critiques: ?
Funded by: Astera
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
Imitation learning
One-sentence summary: train models on human behaviour (such as monitoring which keys a human presses in response to what happens on a computer screen); contrast with Strouse.
Theory of change: humans learn well by observing each other → let’s test whether AIs can learn by observing us → outer alignment and moonshot at safe AGI.
If you squint, this is what ‘alignment by default’ is doing, in the form of self-supervised learning imitating the human web corpus. But the proposed algorithms in imitation learning proper are very different and more obviously Bayesian.
Some names: Jérémy Scheurer, Tomek Korbak, Ethan Perez
Estimated # FTEs: ?
Some outputs in 2023: Imitation Learning from Language Feedback, survey, nice theory from 2022
Critiques: many
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
Reward learning
One-sentence summary: People like CHAI are still looking at reward learning to “reorient the general thrust of AI research towards provably beneficial systems”. (They are also doing a lot of advocacy, like everyone else.)
Theory of change: understand what kinds of things can go wrong when humans are directly involved in training a model → build tools that make it easier for a model to learn what humans want it to learn.
See also RLHF and recursive reward modelling, the industrialised forms.
Some names: CHAI among others
Estimated # FTEs: ?
Some outputs in 2023: Multiple teachers, Minimal knowledge, etc, Causal confusion
Critiques: nice summary of historical problem statements
Funded by: mostly OpenPhil
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: $12,222,246 (CHAI, whole org, 2021, not counting the UC Berkeley admin tax)
Goal robustness
(Figuring out how to make the model keep doing ~what it has been doing so far.)
Measuring OOD
One-sentence summary: let’s build models that can recognise when they are out of distribution (or at least give us tools to notice when they are); see the toy anomaly score after this entry. See also anomaly detection.
Theory of change: bad things happen if powerful AI “learns the wrong lesson” from training data; we should make it not do that.
Some names: Steinhardt, Tegan Maharaj, Irina Rish. Maybe this.
Estimated # FTEs: ?
Some outputs in 2023: Alignment from a DL perspective, Goal Misgeneralization, CoinRun: Solving Goal Misgeneralisation, Modeling ambiguity.
Critiques: ?
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
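One of the simplest versions of the idea, as a sketch: fit a Gaussian to in-distribution features (e.g. penultimate-layer activations on the training set) and flag inputs whose Mahalanobis distance is large. The real literature is much richer; the data below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(5000, 32))                 # in-distribution features
mu = train_feats.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(32))

def ood_score(x):
    d = x - mu
    return float(d @ cov_inv @ d)                         # squared Mahalanobis distance

print(ood_score(rng.normal(size=32)))         # in-distribution: ~32 on average
print(ood_score(rng.normal(size=32) + 5))     # shifted input: much larger, flag it
```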
Concept extrapolation
One-sentence summary: continual learning to make model internals more stable as they learn / as the world changes; safely extending features an agent has learned in training to new datasets and environments.
Theory of change: get them to generalise our values roughly correctly and OOD. Also ‘let’s make it an industry standard for AI systems to “become conservative and ask for guidance when facing ambiguity”, and gradually improve the standard from there as we figure out more alignment stuff.’ – Bensinger’s gloss.
Some names: Stuart Armstrong
Estimated # FTEs: 4?
Some outputs in 2023: good primer, solved a toy problem
Critiques: Soares
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
Mild optimisation
One-sentence summary: avoid Goodharting by getting AI to satisfice rather than maximise (toy quantilizer sketch after this entry).
Theory of change: if we fail to exactly nail down the preferences for a superintelligent agent we die to Goodharting → shift from maximising to satisficing in the agent’s utility function → we get a nonzero share of the lightcone as opposed to zero; also, moonshot at this being the recipe for fully aligned AI.
Some names:
Estimated # FTEs: 2?
Some outputs in 2023: Gillen, Soft-optimisation, Bayes and Goodhart
Critiques: Dearnaley?
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
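For concreteness, a toy quantilizer (in the spirit of Taylor's original proposal): instead of taking the argmax of the proxy utility, sample from the base distribution restricted to its top-q mass by utility.

```python
import numpy as np

def quantilize(actions, utility, base_probs, q=0.1, rng=None):
    """Toy quantilizer: sample from the base distribution restricted to its top
    q-quantile by utility. Limits how hard the proxy utility gets Goodharted
    while still doing better than the base policy on average."""
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(utility)[::-1]                   # best actions first
    cum = np.cumsum(base_probs[order])
    top = order[: np.searchsorted(cum, q) + 1]          # smallest prefix with >= q base mass
    p = base_probs[top] / base_probs[top].sum()
    idx = rng.choice(top, p=p)
    return actions[idx]

actions = np.arange(100)
utility = actions.astype(float)                          # proxy utility: bigger is "better"
base_probs = np.full(100, 0.01)                          # base policy: uniform
print(quantilize(actions, utility, base_probs, q=0.1))   # one of the top ~10 actions by utility
```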
3. Make AI solve it
(Figuring out how models might help figure it out.)
Scalable oversight
(Figuring out how to help humans supervise models. Hard to cleanly distinguish from ambitious mechanistic interpretability.)
OpenAI: Superalignment
One-sentence summary: be ready to align a human-level automated alignment researcher.
Theory of change: get it to help us with scalable oversight, Critiques, recursive reward modelling, and so solve inner alignment. See also seed.
Some names: Ilya Sutskever, Jan Leike, Leopold Aschenbrenner, Collin Burns
Estimated # FTEs: 30?
Some outputs in 2023: whole org
Critiques: Zvi, Christiano, MIRI, Steiner, Ladish, Wentworth, Gao lol
Funded by: Microsoft
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ~$100m of compute alone (20% of OpenAI’s secured compute)
Supervising AIs improving AIs
One-sentence summary: researching scalable methods of tracking behavioural drift in language models and benchmarks for evaluating a language model’s capacity for stable self-modification via self-training.
Theory of change: early models train ~only on human data while later models also train on early-model outputs, which lets early-model problems cascade; left unchecked this will likely cause trouble, so we need a better iterative improvement process.
Some names: Quintin Pope, Jacques Thibodeau, Owen Dudney, Roman Engeler
Estimated # FTEs: 2
Some outputs in 2023: ?
Critiques:
Funded by: LTFF, Lightspeed, OpenPhil, tiny Lightspeed grant
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ~$100,000
Cyborgism
One-sentence summary: Train human-plus-LLM alignment researchers: with humans in the loop and without outsourcing to autonomous agents. More than that, an active attitude towards risk assessment of AI-based AI alignment.
Theory of change: Cognitive prosthetics to amplify human capability and preserve values. More alignment research per year and dollar.
Some names: Janus, Kees Dupuis. See also this team doing similar things.
Estimated # FTEs: 6?
Some outputs in 2023: agenda statement, role play
Critiques: self
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
See also Simboxing (Jacob Cannell).
Task decomp
Recursive reward modelling is supposedly not dead but instead one of the tools Superalignment will build.
Another line tries to make something honest out of chain of thought / tree of thought.
Elicit (previously Ought)
A pivot/spinoff of some sort happened. “most former Ought staff are working at the new organisation”, details unclear.
One-sentence summary: “(a) improved reasoning of AI governance & alignment researchers, particularly on long-horizon tasks, and (b) pushing supervision of process rather than outcomes, which reduces the optimisation pressure on imperfect proxy objectives, leading to ‘safety by construction’.”
Theory of change: “The two main impacts of Elicit on AI Safety are improving epistemics and pioneering process supervision.”
Some names: Charlie George, Andreas Stuhlmüller
Estimated # FTEs: ?
Some outputs in 2023: factored verification
Critiques:
Funded by: public benefit corporation
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: $9,000,000
Adversarial
Deepmind Scalable Alignment
One-sentence summary: “make highly capable agents do what humans want, even when it is difficult for humans to know what that is”.
Theory of change: [“Give humans help in supervising strong agents”] + [“Align explanations with the true reasoning process of the agent”] + [“Red team models to exhibit failure modes that don’t occur in normal use”] are necessary but probably not sufficient for safe AGI.
Some names: Geoffrey Irving
Estimated # FTEs: ?
Some outputs in 2023: ?
Critiques:
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
Anthropic / NYU Alignment Research Group / Perez collab
One-sentence summary: debate 2.0. Scalable oversight of truthfulness: is it possible to develop training methods that incentivize truthfulness even when humans are unable to directly judge the correctness of a model’s output? Scalable benchmarking: how to measure (proxies for) speculative capabilities like situational awareness.
Theory of change: current methods like RLHF will falter as frontier AI tackles harder and harder questions → we need to build tools that help human overseers continue steering AI → let’s develop theory on what approaches might scale → let’s build the tools.
Some names: Samuel Bowman, Ethan Perez, Alex Lyzhov, David Rein, Jacob Pfau, Salsabila Mahdi, Julian Michael
Estimated # FTEs: ?
Some outputs in 2023: Specific versus General Principles for Constitutional AI, Debate Helps Supervise Unreliable Experts, Language Models Don’t Always Say What They Think, full rundown
Critiques: obfuscation, local inadequacy?, it doesn’t work right now (2022)
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
See also FAR (below).
4. Theory
(Figuring out what we need to figure out, and then doing that. This used to be all we could do.)
Galaxy-brained end-to-end solutions
The Learning-Theoretic Agenda
One-sentence summary: try to formalise a more realistic agent, understand what it means for it to be aligned with us, translate between its ontology and ours, and produce desiderata for a training setup that points at coherent AGIs similar to our model of an aligned agent.
Theory of change: work out how to train an aligned AI by first fixing formal epistemology.
Some names: Vanessa Kosoy
Estimated # FTEs: 2
Some outputs in 2023: quantum??, mortal pop, a logic
Critiques: Matolcsi
Funded by: MIRI, MATS, Effective Ventures, Lightspeed
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
Open Agency Architecture
One-sentence summary: Get AI to build a detailed world simulation which humans understand, elicit preferences over future states from humans, formally verify that the AI adheres to coarse preferences; plan using this world model and preferences. See also Provably safe systems (which I hope merges with it); see also APTAMI.
Theory of change: ontology specification, unprecedented formalisation of physical situations, unprecedented formal verification of high-dimensional state-action sequences. Stuart Russell’s Revenge. Notable for not requiring that we solve ELK; does require that we solve ontology though.
Some names: Davidad, Evan Miyazono, Daniel Windham. See also: Cannell.
Estimated # FTEs: 5?
Some outputs in 2023: Several teams working out the details a bit more
Critiques: Soares
Funded by: the estate of Peter Eckersley / Atlas Computing’s future funder / the might of the post-Cummings British state
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: Very roughly £50,000,000
Provably safe systems
One-sentence summary: formally model the behaviour of physical/social systems, define precise “guardrails” that constrain what actions can occur, require AIs to provide safety proofs for their recommended actions, and automatically validate these proofs (toy gatekeeper sketch after this entry). Closely related to OAA.
Theory of change: make a formal verification system that can act as an intermediary between a human user and a potentially dangerous system and only let provably safe actions through.
Some names: Steve Omohundro, Max Tegmark
Estimated # FTEs: 1??
Some outputs in 2023: plan announcement. Omohundro’s org are quite enigmatic.
Critiques: Zvi
Funded by: unknown
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
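A toy gatekeeper in this spirit, purely illustrative: the AI can only act through an interface that re-derives the consequences in a trusted world model and checks explicit guardrails. The real agenda wants machine-checked formal proofs about much richer models; all predicates and state variables below are invented:

```python
# Guardrails the trusted checker enforces on the post-action state.
GUARDRAILS = [
    lambda state: state["reactor_temp"] <= 1000,                        # never exceed temp limit
    lambda state: state["humans_in_room"] == 0 or not state["door_locked"],
]

def simulate(state, action):
    new = dict(state)
    new.update(action["effects"])              # trusted world model, not the AI's claim
    return new

def gate(state, action):
    """Only admit actions whose certificate (claimed post-state) checks out and
    whose consequences satisfy every guardrail."""
    derived = simulate(state, action)
    if action["claimed_post_state"] != derived:
        return False, "certificate does not match world model"
    if not all(g(derived) for g in GUARDRAILS):
        return False, "guardrail violated"
    return True, derived

state = {"reactor_temp": 900, "humans_in_room": 1, "door_locked": False}
action = {"effects": {"reactor_temp": 950},
          "claimed_post_state": {"reactor_temp": 950, "humans_in_room": 1, "door_locked": False}}
print(gate(state, action))                     # (True, {...}): safe, admitted
```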
Conjecture: Cognitive Emulation (CoEms)
One-sentence summary: restrict the design space to (partial) emulations of human reasoning. If the AI uses similar heuristics to us, it should default to not being extreme.
Theory of change: train a bounded tool AI which will help us against AGI without being very dangerous and will make banning unbounded AIs more politically feasible.
Some names: Connor Leahy, Gabriel Alfour?
Estimated # FTEs: 11?
Some outputs in 2023: ?
Critiques: Scher, Samin, org
Funded by: private investors (Plural Platform, Metaplanet, secret)
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: millions USD.
Question-answer counterfactual intervals (QACI)
One-sentence summary: Get the thing to work out its own objective function (a la HCH).
Theory of change: “The aligned goal should be made of fully formalized math, not of human concepts that an AI has to interpret in its ontology, because ontologies break and reshape as the AI learns and changes. [..] a computationally unbounded mathematical oracle being given that goal would take desirable actions; and then, we should design a computationally bounded AI which is good enough to take satisfactory actions.”
Some names: Tamsin Leake
Estimated # FTEs: 3?
Some outputs in 2023: see agenda post
Critiques: Hobson, Anom
Funded by: SFF
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: $438,000+
Understanding agency
(Figuring out ‘what even is an agent’ and how it might be linked to causality.)
Causal foundations
One-sentence summary: using causal models to understand agents and so design environments with no incentive for defection.
Theory of change: Path-specific objectives avoid stringent demands on value specification; the bottleneck is instead ensuring stability (how prone a state is to unintentional side-effects).
Some names: Tom Everitt, Lewis Hammond, Francis Rhys Ward, Ryan Carey, Sebastian Farquhar
Estimated # FTEs: 4-8
Some outputs in 2023: sequence, Defining Deception, unifying the big decision theories, first causal discovery algorithm for discovering agents
Critiques:
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: small fraction of Deepmind
Alignment of Complex Systems: Hierarchical agency
One-sentence summary: Develop formal models of subagents and superagents, use the model to specify desirable properties of whole-part relations (e.g. how to prevent human-friendly parts getting wiped out). Currently using active inference as inspiration for the formalism. Study human and societal preferences and cognition; make a game-theoretic extension of active inference.
Theory of change: Solve self-unalignment, prevent procrustean alignment, allow for scalable noncoercion.
Some names: Jan Kulveit, Tomáš Gavenčiak
Estimated # FTEs: 4
Some outputs in 2023: insights into LLMs, a deep dive into active inference.
Critiques: indirect
Funded by: SFF
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: $425,000?
See also the “ecosystems of intelligence” collab involving Karl Friston and Beren Millidge among many others.
The ronin sharp left turn crew
One-sentence summary: ‘started off as “characterize the sharp left turn” and evolved into getting fundamental insights about idealized forms of consequentialist cognition’.
Theory of change: understand general properties of consequentialist agents → figure out which subproblem is likely to actually help → formalise the relevant insights → fewer ways to die to AI.
Some names: (Kwa, Barnett, Hebbar) in the past
Estimated # FTEs: 2?
Some outputs in 2023: postmortem
Critiques: Gabs, Soares, Pope, etc, tangentially EJT
Funded by: Lightspeed
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: $269,200 (Hebbar)
Shard theory
One-sentence summary: model the internal components of agents, use humans as a model organism of AGI (humans seem made up of shards and so might AI).
Theory of change: “If policies are controlled by an ensemble of influences (“shards”), consider which training approaches increase the chance that human-friendly shards substantially influence that ensemble.”
See also Activation Engineering.
Some names: Quintin Pope, Alex Turner
Estimated # FTEs: 4
Some outputs in 2023: really solid empirical stuff in control / interventional interpretability
Critiques: Chan, Soares, Miller, Lang, Kwa
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
boundaries / membranes
One-sentence summary: Formalise one piece of morality: the causal separation between agents and their environment. See also Open Agency Architecture.
Theory of change: Formalise (part of) morality/safety, solve outer alignment.
Some names: Chris Lakin (full-time), Andrew Critch, Davidad
Estimated # FTEs: 1
Some outputs in 2023: problem statements, planning a workshop early 2024
Critiques:
Funded by: private donor & Foresight
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: <$100k
A disempowerment formalism
One-sentence summary: offer formal and operational notions of (dis)empowerment which are conceptually satisfactory and operationally implementable.
Theory of change: formalisms will be useful in the future.
Some names: Damiano Fornasiere, Pietro Greiner
Estimated # FTEs: 2
Some outputs in 2023: ?
Critiques:
Funded by: Manifund
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: $60,300
Performative prediction
One-sentence summary: “how incentives for performative prediction can be eliminated through the joint evaluation of multiple predictors” (toy zero-sum scoring sketch after this entry).
Theory of change: “If powerful AI systems develop the goal of maximising predictive accuracy, either incidentally or by design, then this incentive for manipulation could prove catastrophic” → notice when it’s happening → design models that don’t do that.
Some names: Rubi Hudson
Estimated # FTEs: 1
Some outputs in 2023: Conditional Prediction with Zero-Sum Training Solves Self-Fulfilling Prophecies (a precursor)
Critiques: ?
Funded by: Manifund
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: $33,200
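Under my (possibly lossy) reading of the zero-sum proposal, a toy version of the scoring rule looks like this: each predictor is scored only relative to the other, so steering the world toward an outcome moves both absolute scores together and confers no advantage.

```python
import numpy as np

def log_score(p, outcome):
    return np.log(p if outcome else 1 - p)

def zero_sum_scores(p_a, p_b, outcome):
    """Toy version of joint, zero-sum evaluation of two predictors: each is scored
    only relative to the other, so (on this reading) neither gains by making its own
    prophecy self-fulfilling; it can only gain by predicting better than its rival."""
    s_a = log_score(p_a, outcome) - log_score(p_b, outcome)
    return s_a, -s_a

# Both predictors think the outcome is ~70% likely; whichever way the world goes,
# their relative scores stay small, and manipulating the outcome moves them together.
print(zero_sum_scores(0.7, 0.72, outcome=True))
print(zero_sum_scores(0.7, 0.72, outcome=False))
```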
Understanding optimisation
One-sentence summary: what is “optimisation power” (formalised), how do we build tools that track it, how relevant is any of this anyway. See also developmental interpretability?
Theory of change: existing theories are either rigorous OR good at capturing what we mean; let’s find one that is both → use the concept to build a better understanding of how and when an AI might get more optimisation power. Would be nice if we could detect or rule out speculative stuff like gradient hacking too.
Some names: Alex Altair, Jacob Hilton, Thomas Kwa
Estimated # FTEs: ?
Some outputs in 2023: Altair drafts (1, 2, 3, 4), How Many Bits Of Optimisation Can One Bit Of Observation Unlock? (Wentworth), but at what cost?, Towards Measures of Optimisation (MacDermott, Oldenziel); Goodharting: RL stuff, Krueger, overopt, catastrophic Goodhart.
Critiques: ?
Funded by: LTFF, OpenAI, ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
Corrigibility
(Figuring out how we get superintelligent agents to keep listening to us. Arguably scalable oversight and superalignment are ~atheoretical approaches to this.)
Behavior alignment theory
One-sentence summary: predict properties of AGI (e.g. powerseeking) with formal models. Corrigibility as the opposite of powerseeking.
Theory of change: figure out hypotheses about properties powerful agents will have → attempt to rigorously prove under what conditions the hypotheses hold, test them when feasible.
Some names: Marcus Hutter, Michael Cohen (1, 2), Michael Osborne
Estimated # FTEs: 3
Some outputs in 2023: ?
Critiques: Carey & Everitt (against corrigibility)
Funded by: Deepmind
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
The comments in this thread are extremely good – but none of the authors are working on this!! See also Holtman’s neglected result. See also EJT (and formerly Petersen). See also Dupuis.
Ontology identification
(Figuring out how superintelligent agents think about the world and how we get superintelligent agents to actually tell us what they know. Much of interpretability is incidentally aiming at this.)
ARC Theory
One-sentence summary: train an AI whose latent (including merely seeming or encrypted) knowledge we can extract, even when it has incentives to hide it. ELK, formalising heuristics, mechanistic anomaly detection
Theory of change: formalise notions of models having access to some bit(s) of information → design training objectives that incentivize systems to honestly report their internal beliefs
Some names: Paul Christiano, Mark Xu.
Some outputs in 2023: Nothing public; ‘we’re trying to develop a framework for “formal heuristic arguments” that can be used to reason about the behavior of neural networks.’
Critiques: clarification, alternative formulation
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
Natural abstractions
One-sentence summary: check the hypothesis that our universe abstracts well and many cognitive systems learn to use similar abstractions.
Theory of change: build tools to check the hypothesis, run the experiments, if the hypothesis holds we don’t need to worry about finicky parts of alignment like whether an AGI will know what we mean by love.
Some names: John Wentworth
Estimated # FTEs: 2?
Some outputs in 2023: tag
Critiques: Summary and critique, Soares
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ?
Understand cooperation
(Figuring out how inter-AI and AI/human game theory should or would work.)
CLR
One-sentence summary: future agents intentionally creating s-risks is the worst of all possible problems; we should avoid that.
Theory of change: make present and future AIs inherently cooperative via improving theories of cooperation.
Some names: Jesse Clifton, Anni Leskelä, Julian Stastny
Estimated # FTEs: 15
Some outputs in 2023: open minded updatelessness, possibly this, spitefulness
Critiques:
Funded by: Ruairi Donnelly? Polaris Ventures?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: £3,375,081 income last year
FOCAL
One-sentence summary: make sure advanced AI uses what we regard as proper game theory.
Theory of change: (1) keep the pre-superintelligence world sane by making AIs more cooperative; (2) remain integrated in the academic world, collaborate with academics on various topics and encourage their collaboration on x-risk; (3) hope our work on “game theory for AIs”, which emphasises cooperation and benefit to humans, has framing & founder effects on the new academic field.
Some names: Vincent Conitzer, Caspar Oesterheld
Estimated # FTEs: 7
Some outputs in 2023: Bounded Inductive Rationality, Computational Complexity of Single-Player Imperfect-Recall Games, Game Theory with Simulation of Other Players
Critiques: Self-submitted: “our theory of change is not clearly relevant to superintelligent AI”
Funded by: Polaris Ventures
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: >$500,000
See also higher-order game theory. We moved CAIF to the “Research support” appendix. We moved AOI to “misc”.
5. Labs with miscellaneous efforts
(Making lots of bets rather than following one agenda, which is awkward for a topic taxonomy.)
Deepmind Alignment Team
One-sentence summary: theory generation, threat modelling, and toy methods to help with those. “Our main threat model is basically a combination of specification gaming and goal misgeneralisation leading to misaligned power-seeking.” See announcement post for full picture.
Theory of change: direct the training process towards aligned AI and away from misaligned AI: build enabling tech to ease/enable alignment work → apply said tech to correct missteps in training non-superintelligent agents → keep an eye on it as capabilities scale to ensure the alignment tech continues to work.
See also (in this document): Process-based supervision, Red-teaming, Capability evaluations, Mechanistic interpretability, Goal misgeneralisation, Causal alignment/incentives
Some names: Rohin Shah, Vika Krakovna, Janos Kramar, Neel Nanda
Estimated # FTEs: ~40
Some outputs in 2023: Tracr, Does Circuit Analysis Interpretability Scale?, The Hydra Effect; understanding / distilling threat models: “refining the sharp left turn” (2022) and “will capabilities generalise more” (2022); doubly-efficient debate (including a Lean proof)
Critiques: Zvi
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ~$10,000,000?
Apollo
One-sentence summary: conceptual work (currently on deceptive alignment), auditing, and model evaluations; also a non-public interp agenda and deception evals in major labs.
Theory of change: “Conduct foundational research in interpretability and behavioral model evaluations, audit real-world models for deceptive alignment, support policymakers with our technical expertise where needed.”
Some names: Marius Hobbhahn, Lee Sharkey, Lucius Bushnaq et al
Estimated # FTEs: 14
Some outputs in 2023: Understanding strategic deception and deceptive alignment, Research on strategic deception, Causal Framework for AI Regulation and Auditing, non-public stuff
Critiques: No public critiques yet
Funded by: OpenPhil, SFF, Manifund, “multiple institutional and private funders”
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: >$2,000,000
Anthropic Assurance / Trust & Safety / RSP Evaluations / Interpretability
One-sentence summary: remain ahead of the capabilities curve / maintain the ability to figure out what’s up with state-of-the-art models, keep an updated risk profile, and propagate flaws to relevant parties as they are discovered.
Theory of change: “hands-on experience building safe and aligned AI… We’ll invest in mechanistic interpretability because solving that would be awesome, and even modest success would help us detect risks before they become disasters. We’ll train near-cutting-edge models to study how interventions like RL from human feedback and model-based supervision succeed and fail, iterate on them, and study how novel capabilities emerge as models scale up. We’ll also share information so policy-makers and other interested parties can understand what the state of the art is like, and provide an example to others of how responsible labs can do safety-focused research.”
Some names: Chris Olah, Nina Rimsky, Tamera Lanham, Zac Hatfield-Dodds, Evan Hubinger
Estimated # FTEs: 60?
Some outputs in 2023: Moral Self-Correction in LLMs, LLM Generalization with Influence Functions
Critiques: of RSPs
Funded by: Google, Amazon, Menlo Ventures, Salesforce, Spark Capital, Sahin Boydas, Eric Schmidt, Jaan Tallinn, …
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ~a hundred million dollars
FAR
One-sentence summary: a science of robustness / fault tolerant alignment is their stated aim, but they do lots of interpretability papers and other things.
Theory of change: make AI systems less exploitable and so prevent one obvious failure mode of helper AIs / superalignment / oversight: attacks on what is supposed to prevent attacks. In general, work on overlooked safety research others don’t do for structural reasons: too big for academia or independents, but not totally aligned with the interests of the labs (e.g. prototyping moonshots, embarrassing issues with frontier models).
Some names: Adam Gleave, Ben Goldhaber, Adrià Garriga-Alonso, Daniel Pandori
Estimated # FTEs: 10
Some outputs in 2023: papers; impressive adversarial finding
Critiques: tangential from Demski
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: $1,507,686 (2022 income)
Krueger Lab
One-sentence summary: misc. Understand Goodhart’s law; reward learning 2.0; demonstrating safety failures; understand DL generalization / learning dynamics.
Theory of change: misc. Improve theory and demos while steering policy to steer away from AGI risk.
Some names: David Krueger, Dima Krasheninnikov, Lauro Langosco
Estimated # FTEs: 12
Some outputs in 2023: a formal definition of goodharting, how LLMs weigh sources, grokking as double descent, proof-of-concept for an approach to automated interpretability (in review).
Critiques: ?
Funded by: SFF
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ~$1m
AI Objectives Institute (AOI)
One-sentence summary: Hard to classify. How to apply AI to enhancing human agency, individual and collective? What goals should scalable delegated intelligence be aligned to? How can AI be used to improve the accountability of institutions?
Theory of change: Think about how regulation “aligns” corporations, and insights about how to safely integrate AI into society will come, as will insights into technical alignment questions. Develop socially beneficial AI now and it will improve chances of AI being beneficial in the long run, including by paths we haven’t even thought of yet.
Some names: Deger Turan, Matija Franklin, Peli Grietzer, Tushant Jha
Estimated # FTEs: 7 researchers
Some outputs in 2023: one OAA fork, plans for concrete tools with actual users
Critiques: ?
Funded by: Future of Life Institute, Plurality Institute, Survival and Flourishing Project, private individuals
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: in flux, low millions.
More meta
We don’t distinguish between massive labs, individual researchers, and sparsely connected networks of people working on similar stuff. The funding amounts and full time employee estimates might be a reasonable proxy.
The categories we chose have substantial overlap and see the “see also”s for closely related work.
I wanted this to be a straight technical alignment doc, but people pointed out that would exclude most work (e.g. evals and nonambitious interpretability, which are safety but not alignment) so I made it a technical AGI safety doc. Plus ça change.
The only selection criterion is “I’ve heard of it and >= 1 person was recently working on it”. I don’t go to parties so it’s probably a couple months behind.
Obviously this is the Year of Governance and Advocacy, but I exclude all this good work: by its nature it gets attention. I also haven’t sought out the notable amount by ordinary labs and academics who don’t frame their work as alignment. Nor the secret work.
You are unlikely to like my partition into subfields; here are others.
No one has read all of this material, including us. Entries are based on public docs or private correspondence where possible but the post probably still contains >10 inaccurate claims. Shouting at us is encouraged. If I’ve missed you (or missed the point), please draw attention to yourself.
If you enjoyed reading this, consider donating to Lightspeed, MATS, Manifund, or LTFF: some good work is bottlenecked by money, and some people specialise in giving away money to enable it.
Conflicts of interest: I wrote the whole thing without funding. I often work with ACS and PIBBSS and have worked with Team Shard. Lightspeed gave a nice open-ended grant to my org, Arb. CHAI once bought me a burrito.
If you’re interested in doing or funding this sort of thing, get in touch at hi@arbresearch.com. I never thought I’d end up as a journalist, but stranger things will happen.
Thanks to Alex Turner, Neel Nanda, Jan Kulveit, Adam Gleave, Alexander Gietelink Oldenziel, Marius Hobbhahn, Lauro Langosco, Steve Byrnes, Henry Sleight, Raymond Douglas, Robert Kirk, Yudhister Kumar, Quratulain Zainab, Tomáš Gavenčiak, Joel Becker, Lucy Farnik, Oliver Hayman, Sammy Martin, Jess Rumbelow, Jean-Stanislas Denain, Ulisse Mini, David Mathers, Chris Lakin, Vojta Kovařík, Zach Stein-Perlman, and Linda Linsefors for helpful comments.
Appendices
Appendix: Prior enumerations
Everitt et al (2018)
Ji (2023)
Soares (2023)
Gleave and MacLean (2023)
Krakovna and Shah on Deepmind (2023)
AI Plans (2023), mostly irrelevant
Macdermott (2023)
Kirchner et al (2022): unsupervised analysis
Krakovna (2022), Paradigms of AI alignment
Hubinger (2020)
Perret (2020)
Nanda (2021)
Larsen (2022)
Zoellner (2022)
McDougall (2022)
Sharkey et al (2022) on interp
Koch (2020)
Critch (2020)
Hubinger on types of interpretability (2022)
Tai_safety_bibliography (2021)
Akash and Larsen (2022)
Karnofsky prosaic plan (2022)
Steinhardt (2019)
Russell (2016)
FLI (2017)
Shah (2020)
things which claim to be agendas
Tekofsky listing indies (2023)
This thing called boundaries (2023)
Sharkey et al (2022), mech interp
FLI Governance scorecard
The money
Appendix: Graveyard
Ambitious value learning?
MIRI youngbloods (see Hebbar)
JW selection theorems??
Provably Beneficial Artificial Intelligence (but see Open Agency and Omohundro)
HCH (see QACI)
Iterated distillation and amplification / Iterated amplification / IDA → Critiques and recursive reward modelling
Debate is now called Critiques and ERO
Market-making (Hubinger)
Logical inductors
Conditioning Predictive Models: Risks and Strategies?
Impact measures, conservative agency, side effects → “power aversion”
Acceptability Verification
Quantilizers
Redwood interp?
AI Safety Hub
Enabling Robots to Communicate their Objectives (early stage interp?)
Aligning narrowly superhuman models (Cotra idea; tiny followup; lives on as scalable oversight?)
automation of semantic interpretability (i.e. automatically proposing hypotheses instead of just automatically verifying them)
Oracle AI is not dead so much as ~everything LLM falls under its purview
Tool AI, similarly, ~is LLMs plugged into a particular interface
EleutherAI: #accelerating-alignment—AI alignment assistants are live, but it doesn’t seem like EleutherAI is currently working on this
Appendix: Biology for AI alignment
Lots of agendas but not clear if anyone besides Byrnes and Thiergart is actively turning the crank. Seems like it would need a billion dollars.
Human enhancement
One-sentence summary: maybe we can give people new sensory modalities, or much higher bandwidth for conceptual information, or much better idea generation, or direct interface with DL systems, or direct interface with sensors, or transfer learning, and maybe this would help. The old superbaby dream goes here I suppose.
Theory of change: maybe this makes us better at alignment research
Merging
One-sentence summary: maybe we can form networked societies of DL systems and brains
Theory of change: maybe this lets us preserve some human values through bargaining or voting or weird politics.
Cyborgism, Millidge, Dupuis. Davidad sometimes.
As alignment aid
One-sentence summary: maybe we can get really high-quality alignment labels from brain data, maybe we can steer models by training humans to do activation engineering fast and intuitively, maybe we can crack the true human reward function / social instincts and maybe adapt some of them for AGI.
Theory of change: as you’d guess
Some names: Byrnes, Cvitkovic, Foresight’s BCI, Also (list from Byrnes): Eli Sennesh, Adam Safron, Seth Herd, Nathan Helm-Burger, Jon Garcia, Patrick Butlin
Appendix: Research support orgs
One slightly confusing class of org is described by the sample {CAIF, FLI}. Often run by active researchers with serious alignment experience, but usually not following an obvious agenda, delegating a basket of strategies to grantees, doing field-building stuff like NeurIPS workshops and summer schools.
CAIF
One-sentence summary: support researchers making differential progress in cooperative AI (e.g. precommitment mechanisms that can’t be used to make threats)
Some names: Lewis Hammond
Estimated # FTEs: 3
Some outputs in 2023: NeurIPS contest, summer school
Critiques:
Funded by: Polaris Ventures
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: £2,423,943
AISC
One-sentence summary: entrypoint for new researchers to test fit and meet collaborators. More recently focussed on a capabilities pause. Still going!
Some names: Remmelt Ellen, Linda Linsefors
Estimated # FTEs: 2
Some outputs in 2023: tag
Critiques: ?
Funded by: ?
Trustworthy command, closure, opsec, common good, alignment mindset: ?
Resources: ~$200,000
See also:
FLI
AISS
PIBBSS
Gladstone AI
Apart
Catalyze
EffiSciences
Students (Oxford, Harvard, Groningen, Bristol, Cambridge, Stanford, Delft, MIT)
Appendix: Meta, mysteries, more
Metaphilosophy (2023) – Wei Dai, probably <1 FTE
Encultured—making a game
Bricman: https://compphil.github.io/truth/
Algorithmic Alignment
McIlrath: side effects and human-aware AI
how hard is alignment?
Intriguing
The UK government now has an evals/alignment lab of sorts (Xander Davies, Nitarshan Rajkumar, Krueger, Rumman Chowdhury)
The US government will soon have an evals/alignment lab of sorts, USAISI. (Elham Tabassi?)
A Roadmap for Robust End-to-End Alignment (2018)
Not theory but cool
Greene Lab Cognitive Oversight? Closest they’ve got is AI governance and they only got $13,800
GATO Framework seems to have a galaxy-brained global coordination and/or self-aligning AI angle
Wentworth 2020 - a theory of abstraction suitable for embedded agency.
https://www.aintelope.net/
ARAAC—Make RL agents that can justify themselves (or other agents with their code as input) in human language at various levels of abstraction (AI apology, explainable RL for broad XAI)
Modeling Cooperation
Sandwiching (ex)
Tripwires
Meeseeks AI
Shutdown-seeking AI
Decision theory (Levinstein)
CAIS: Machine ethics—Model intrinsic goods and normative factors, vs mere task preferences, as these will be relevant even under extreme world changes; helps us avoid proxy misspecification as well as value lock-in. Present state of research unknown (some relevant bits in RepEng).
CAIS: Power Aversion—incentives to models to avoid gaining more power than necessary. Related to mild optimisation and powerseeking. Present state of research unknown.
Unless you zoom out so far that you reach vague stuff like “ontology identification”. We will see if this total turnover is true again in 2028; I suspect a couple will still be around, this time.
> one can posit neural network interpretability as the GiveDirectly of AI alignment: reasonably tractable, likely helpful in a large class of scenarios, with basically unlimited scaling and only slowly diminishing returns. And just as any new EA cause area must pass the first test of being more promising than GiveDirectly, so every alignment approach could be viewed as a competitor to interpretability work. – Niplav