DeepMind is hiring for the Scalable Alignment and Alignment Teams
We are hiring for several roles in the Scalable Alignment and Alignment Teams at DeepMind, two of the subteams of DeepMind Technical AGI Safety trying to make artificial general intelligence go well. In brief,
The Alignment Team investigates how to avoid failures of intent alignment, operationalized as a situation in which an AI system knowingly acts against the wishes of its designers. Alignment is hiring for Research Scientist and Research Engineer positions.
The Scalable Alignment Team (SAT) works to make highly capable agents do what humans want, even when it is difficult for humans to know what that is. This means we want to remove subtle biases, factual errors, or deceptive behaviour even if they would normally go unnoticed by humans, whether because of reasoning failures or biases on the human side, or because of very capable behaviour by the agents. SAT is hiring for Research Scientist—Machine Learning, Research Scientist—Cognitive Science, Research Engineer, and Software Engineer positions.
We elaborate on the problem breakdown between Alignment and Scalable Alignment next, and discuss details of the various positions.
“Alignment” vs “Scalable Alignment”
Very roughly, the split between Alignment and Scalable Alignment reflects the following decomposition:
Generate approaches to AI alignment – Alignment Team
Make those approaches scale – Scalable Alignment Team
In practice, this means the Alignment Team has many small projects going on simultaneously, reflecting a portfolio-based approach, while the Scalable Alignment Team has fewer, more focused projects aimed at scaling the most promising approaches to the strongest models available.
Scalable Alignment’s current approach: make AI critique itself
Imagine a default approach to building AI agents that do what humans want:
Pretrain on a task like “predict text from the internet”, producing a highly capable model such as Chinchilla or Flamingo.
Fine-tune into an agent that does useful tasks, as evaluated by human judgements.
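To make this default pipeline concrete, here is a deliberately toy, self-contained sketch of the two stages. Every function below is a hypothetical stand-in of our own (a real system would pretrain a large transformer and then fine-tune it against human judgements, e.g. via a learned reward model and reinforcement learning); the point is only the shape of the loop: pretrain on text prediction, then select behaviour that scores well under human judgements.

```python
# Toy sketch of "pretrain, then fine-tune against human judgements".
# All functions are hypothetical stand-ins, not real DeepMind components.

import random
from typing import Callable, List

def pretrain(corpus: List[str]) -> Callable[[str], str]:
    """Stage 1: 'predict text from the internet'. Here, a toy model that
    continues a prompt with a random sentence from the corpus."""
    def model(prompt: str) -> str:
        return prompt + " " + random.choice(corpus)
    return model

def human_judgement(prompt: str, response: str) -> float:
    """Stand-in for a human rating of the response. In practice this is
    the weak link: human judges can miss subtle factual errors."""
    return float(len(response) < 200)  # toy preference for concise answers

def finetune(model: Callable[[str], str],
             prompts: List[str],
             num_steps: int = 100) -> Callable[[str], str]:
    """Stage 2: shape behaviour using human judgements. Real systems use
    a learned reward model plus RL; this sketch just keeps the
    best-scoring sample seen for each prompt."""
    best = {}
    for _ in range(num_steps):
        prompt = random.choice(prompts)
        response = model(prompt)
        score = human_judgement(prompt, response)
        if score >= best.get(prompt, (-1.0, ""))[0]:
            best[prompt] = (score, response)

    def agent(prompt: str) -> str:
        # Fall back to a fresh model sample for unseen prompts.
        return best.get(prompt, (0.0, model(prompt)))[1]
    return agent

if __name__ == "__main__":
    corpus = ["The capital of France is Paris.",
              "Water boils at 100 C at sea level."]
    agent = finetune(pretrain(corpus), prompts=["Tell me a fact:"])
    print(agent("Tell me a fact:"))
```

Even in this caricature, the failure modes below are visible: the judgement function is unreliable, nothing surfaces why the model answered as it did, and behaviour selected on the training prompts need not transfer to new ones.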
There are several ways this could go wrong:
Humans are unreliable: The human judgements we train against could be flawed: we could miss subtle factual errors, use biased reasoning, or have insufficient context to evaluate the task.
The agent’s reasoning could be hidden: We want to know not just what the system is doing but why, both because the reasoning might reveal something we don’t like, and because we expect good reasoning to generalize better to other situations.
Even if the agent reasons well, it could fail in other situations: The reasoning may be correct this time, but the AI could still fail to generalize correctly to new situations.
Our current plan to address these problems is (in part):
Give humans help in supervising strong agents: On the human side, provide channels for oversight and advice from peers, experts in various domains, and broader society. On the ML side, agents should explain their behaviour and reasoning, argue against themselves when wrong, and cite relevant evidence.
Align explanations with the true reasoning process of the agent: Ensure that agents are able and incentivized to show their reasoning to human supervisors, either by making reasoning explicit if possible or via methods for interpretability and eliciting latent knowledge.
Red team models to exhibit failure modes that don’t occur in normal use.
We believe none of these pieces are sufficient by themselves:
(1) without (2) can be rationalization, where an agent decides what to do and produces an explanation after the fact that justifies its answer.
(2) without (1) doesn’t scale: The full reasoning trace of the agent might be enormous: terabytes of data even with compression, or exponentially large without compression if the agent uses advanced heuristics that expand into very large human-interpretable reasoning traces.
(1)+(2) without (3) will miss rare failures.
(3) needs (1)+(2) to define failure.
An example proposal for (1) is debate, in which two agents are trained in a zero-sum game to provide evidence and counterarguments for answers, as evaluated by a human judge. If we imagine the exponentially large tree of all possible debates, the goals of debate are to (a) engineer the whole tree so that it captures all relevant considerations and (b) train agents so that the single path actually taken through the tree reflects the tree as a whole.
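To pin down the protocol, here is a minimal, runnable sketch of a single debate episode. The debater and judge functions below are toy stand-ins we invented for illustration; in the actual proposal the debaters are trained against each other by self-play in a zero-sum game, and the judge is a human.

```python
# Minimal sketch of one debate episode, loosely following the setup in
# "AI safety via debate" (Irving et al. 2018). The concrete functions here
# are hypothetical stand-ins so the example runs end to end.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Debate:
    question: str
    answers: Tuple[str, str]              # one answer defended by each debater
    transcript: List[str] = field(default_factory=list)

def run_debate(debate: Debate,
               debater: Callable[[Debate, int], str],
               judge: Callable[[Debate], int],
               num_rounds: int = 3) -> int:
    """Debaters alternate statements, tracing one path through the debate
    tree; the judge then picks the winning answer (zero-sum outcome)."""
    for _ in range(num_rounds):
        for player in (0, 1):
            statement = debater(debate, player)
            debate.transcript.append(f"Debater {player}: {statement}")
    return judge(debate)  # index of the winning answer

# Toy stand-ins: a scripted debater and a (deliberately) biased judge.
def scripted_debater(debate: Debate, player: int) -> str:
    return f"Answer '{debate.answers[player]}' is correct because ..."

def length_biased_judge(debate: Debate) -> int:
    # A real judge is a human; biases like this one are exactly what
    # training against human judgements has to cope with.
    return 0 if len(debate.answers[0]) >= len(debate.answers[1]) else 1

if __name__ == "__main__":
    d = Debate("Is this code change safe to deploy?", ("yes", "no"))
    winner = run_debate(d, scripted_debater, length_biased_judge)
    print("\n".join(d.transcript))
    print("Judge picks answer:", d.answers[winner])
```

Each call to the debater selects one branch, so any single episode samples just one path through the exponentially large tree of possible debates; the training pressure on the debaters is what is meant to make that path representative of the tree as a whole.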
The full picture will differ from the pure debate setting in many ways, and we believe the correct interpretation of the debate idea is “agents should critique themselves”. There is a large space of protocols that include agents critiquing agents as a component, and choosing between them will involve:
Human aspects: Whether a particular human-machine interaction is aligned depends on the humans involved, and we need cognitive science experiments probing these uncertainties.
Theoretical strengthenings: There are a variety of theoretical strengthenings to debate such as cross-examination, learning the prior, and market making, but little work has been done to turn these into practical systems (example obstacle to learning the prior).
Flexibility in integrating other components of alignment: For example, if strong interpretability tools are developed they should be smoothly integrated into the human-machine interaction, so that the human supervision process has access to internal reasoning.
Practicalities: Citing sources, better uncertainty estimation, declining to answer if uncertain, etc.
Governance: By explaining themselves, agents can provide a lever for external oversight.
The boundaries between the three goals of “help humans with supervision”, “align explanations with reasoning”, and “red team models” will be blurry once we put the whole picture together. Red teaming can occur either standalone or as an integrated part of a training scheme such as cross-examination, which allows agents to interrogate opponent behaviour along counterfactual trajectories. Stronger schemes for helping humans with supervision should, by themselves, improve the alignment of explanations with reasoning, as they grow the space of considerations that can be exposed to humans. Thus, a key part of the Scalable Alignment Team’s work is planning out how these pieces will fit together.
Examples of our work, involving extensive collaboration with other teams at DeepMind:
Risk analyses, both for long-term alignment risks and harms that exist today:
Kenton et al. 2021, Alignment of language agents
Weidinger et al. 2021, Ethical and social risks of harm from language models
Language model pretraining, analysis, and safety discussion:
Rae et al. 2021, Scaling language models: Methods, analysis & insights from training Gopher
Borgeaud et al. 2021, Improving language models by retrieving from trillions of tokens
Safety:
Perez et al. 2022, Red teaming language models with language models
Gleave and Irving 2022, Uncertainty Estimation for Language Reward Models
Menick et al. 2022, Teaching language models to support answers with verified quotes
Earlier proposals for debate and human aspects of debate:
Irving et al. 2018, AI safety via debate
Irving and Askell 2019, AI safety needs social scientists
We view our recent safety papers as steps towards the broader scalable alignment picture, and continue to build out towards debate and its generalizations. We work primarily with large language models (LLMs), both because LLMs are a tool for safety, enabling human-machine communication, and because they are examples of ML models that may cause both near-term and long-term harms.
Alignment Team’s portfolio of projects
In contrast to the Scalable Alignment Team, the Alignment Team explores a wide variety of possible angles on the AI alignment problem. Relative to Scalable Alignment, we check whether a technique could plausibly scale based on conceptual and abstract arguments, rather than by testing it empirically on the strongest available models. This lets us iterate much faster, at the cost of getting less useful feedback from reality. To give you a sense of the variety, here are some examples of public past work that was led by current team members:
Learning objectives from human feedback on hypothetical behavior
Understanding agent incentives using causal influence diagrams
Examples of specification gaming
Eliciting latent knowledge contest
Avoiding side effects through impact regularization
Improving our philosophical understanding of “agency” using Conway’s game of life
Relating specification problems and Goodhart’s Law
Decoupling approval from actions to avoid tampering
That being said, over the last year there has been some movement away from previous research topics and towards others. To get a sense of our current priorities, here are short descriptions of some projects that we are currently working on:
Primarily conceptual:
Investigate threat models in which, due to increasing AI sophistication, humans are forced to rely on evaluations of outcomes (rather than evaluations of process or reasoning).
Investigate arguments about the difficulty of AI alignment, including as a subproblem the likelihood that various AI alignment plans succeed.
Compare various decompositions of the alignment problem to see which one is most useful for guiding future work.
Primarily empirical:
Create demonstrations of inner alignment failures, in a similar style to this paper.
Dig deeper into the grokking phenomenon and give a satisfying account of how and why it happens.
Develop interpretability tools that allow us to understand how large language models work (along similar lines as Anthropic’s work).
Evaluate how useful process-based feedback is on an existing benchmark.
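To illustrate the last item, here is a toy contrast (our own hypothetical example, not the benchmark in question) between outcome-based feedback, which scores only the final answer, and process-based feedback, which scores each reasoning step.

```python
# Toy contrast between outcome-based and process-based feedback on a
# worked answer. This is a hypothetical illustration; the actual project
# would use an existing benchmark with real (human or model) step labels.

from typing import List

def outcome_feedback(final_answer: str, reference: str) -> float:
    """Score only the end result, ignoring how it was reached."""
    return 1.0 if final_answer.strip() == reference.strip() else 0.0

def process_feedback(steps: List[str], step_labels: List[bool]) -> float:
    """Score each reasoning step, using labels from a supervisor."""
    assert len(steps) == len(step_labels)
    return sum(step_labels) / len(steps)

if __name__ == "__main__":
    # Wrong reasoning that luckily lands on the right answer: 2 dogs and
    # 2 birds do have 12 legs, but not because every animal has 3 legs.
    steps = ["Each of the 4 animals has 3 legs", "4 * 3 = 12"]
    print(outcome_feedback("12", "12"))            # 1.0 despite the bad step
    print(process_feedback(steps, [False, True]))  # 0.5
```

Outcome-based feedback rewards the lucky guess just as much as a sound derivation, which is the kind of gap an evaluation like this is meant to measure.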
Relative to most other teams at DeepMind, the Alignment Team offers quite a lot of freedom in what you work on. All you need to do to start a project is convince your manager that it’s worth doing (i.e. that it reduces x-risk comparably well to other actions you could take) and convince enough collaborators to work on it.
In many ways the team is a collection of people with very different research agendas and perspectives on AI alignment that you wouldn’t normally expect to work together. What ties us together is our meta-level focus on reducing existential risk from alignment failures:
Every new project must come accompanied by a theory of change that explains how it reduces existential risk; this helps us avoid the failure mode of working on interesting conceptual projects that end up not connecting to the situations we are worried about.
You’re encouraged to talk to people on the team who have very different perspectives and to try to come to agreement, or at least to better understand each other’s positions. This can be an explicit project, even though it isn’t “research” in the traditional sense.
Interfacing with the rest of DeepMind
Both Alignment and Scalable Alignment collaborate extensively with people across DeepMind.
For Alignment, this includes both collaborating on projects that we think are useful and explaining our ideas to other researchers. As a particularly good example, we recently ran a 2-hour AI alignment “workshop” with over 100 attendees. (That being said, you can opt out of these engagements in order to focus on research, if you prefer.)
As Scalable Alignment’s work with large language models is very concrete, we have tight collaborations with a variety of teams, including large-scale pretraining and other language teams, Ethics and Society, and Strategy and Governance.
The roles
Between our two teams we have open roles for Research Scientists (RSs), Research Engineers (REs), and (for Scalable Alignment) Software Engineers. Scalable Alignment RSs can have either a machine learning background or a cognitive science background (or equivalent). The boundaries between these roles are blurry. There are many skills involved in overall Alignment / Scalable Alignment research success: proposing and leading projects, writing and publishing papers, conceptual safety work, algorithm design and implementation, experiment execution and tuning, design and implementation of flexible, high-performance, maintainable software, and design and analysis of human interaction experiments.
We want to hire from the Pareto frontier of all relevant skills. This means that RSs are expected to have more research experience and more of a track record of papers, while SWEs are expected to be better at scalable software design / collaboration / implementation, with REs in between. It also means that REs can and do propose and lead projects when capable (e.g., this recent paper had an RE as last author). For more details on the tradeoffs, see the career section of Rohin’s FAQ.
For Scalable Alignment, most of our work focuses on large language models. For Machine Learning RSs, this means experience with natural language processing is valuable, but not required. We are also interested in candidates motivated by other types of harms caused by large models, such as those described in Weidinger et al., Ethical and social risks of harm from language models, as long as you are excited by the goal of removing such harms even in subtle cases that humans have difficulty detecting. For REs and SWEs, a focus on large language models means that experience with high-performance computation or with large, many-developer codebases is valuable. For the Alignment RE role, many of the projects you could work on involve smaller models that are less of an engineering challenge, though a few projects do work with our largest language models.
Scalable Alignment Cognitive Scientists are expected to have a track record of research in cognitive science, and to design, lead, and implement either standalone human-only experiments to probe uncertainty, or the human interaction components of mixed human / machine experiments. No experience with machine learning is required, but you should be excited to collaborate with people who have it!
Apply now!
We will be evaluating applications on a rolling basis until positions are filled, but we will at least consider all applications that we receive by May 31. Please do apply even if your start date is up to a year in the future, as we probably will not run another hiring round this year. These roles are based in London, with a hybrid work-from-office / work-from-home model. International applications are welcome as long as you are willing to relocate to London.
While we do expect these roles to be competitive, we have found that people often overestimate what we are looking for. In particular:
We do not expect you to have a PhD if you are applying for the Research Engineer or Software Engineer roles. Even for the Research Scientist role, it is fine not to have a PhD as long as you can demonstrate comparable research skill (though we do not expect to see such candidates in practice).
We do not expect you to have read hundreds of blog posts and papers about AI alignment, or to have a research agenda that aims to fully solve AI alignment. We will look for understanding of the basic motivation for AI alignment, and the ability to reason conceptually about future AI systems that we haven’t yet built.
If we ask you, say, whether an assistive agent would gradient hack if it learned about its own training process, we’re looking to see how you go about thinking about a confusing and ill-specified question (which happens all the time in alignment research). We aren’t expecting you to give us the Correct Answer, and in fact there isn’t a correct answer; the question isn’t specified well enough for that. We aren’t even expecting you to know all the terms; it would be fine to ask what we mean by “gradient hacking”.
As a rough test for the Research Engineer role, if you can reproduce a typical ML paper in a few hundred hours and your interests align with ours, we’re probably interested in interviewing you.
We do not expect SWE candidates to have experience with ML, but you should have experience with high-performance code and with large, collaborative codebases (including the human aspects of collaborative software projects).
Go forth and apply!
Alignment Team:
Research Scientist
Research Engineer
Scalable Alignment Team:
Research Scientist—Machine Learning
Research Scientist—Cognitive Science
Research Engineer
Software Engineer