(Not a take, just pulling out infographics and quotes for future reference from the new DeepMind paper outlining their approach to technical AGI safety and security)
Overview of risk areas, grouped by factors that drive differences in mitigation approaches:
Overview of their approach to mitigating misalignment:
Overview of their approach to mitigating misuse:
Path to deceptive alignment:
How to use interpretability:
| Goal | Understanding v Control | Confidence | Concept v Algorithm | (Un)supervised? | How context specific? |
| --- | --- | --- | --- | --- | --- |
| Alignment evaluations | Understanding | Any | Concept+ | Either | Either |
| Faithful reasoning | Understanding∗ | Any | Concept+ | Supervised+ | Either |
| Debugging failures | Understanding∗ | Low | Either | Unsupervised+ | Specific |
| Monitoring | Understanding | Any | Concept+ | Supervised+ | General |
| Red teaming | Either | Low | Either | Unsupervised+ | Specific |
| Amplified oversight | Understanding | Complicated | Concept | Either | Specific |
Interpretability techniques:
| Technique | Understanding v Control | Confidence | Concept v Algorithm | (Un)supervised? | How specific? | Scalability |
| --- | --- | --- | --- | --- | --- | --- |
| Probing | Understanding | Low | Concept | Supervised | Specific-ish | Cheap |
| Dictionary learning | Both | Low | Concept | Unsupervised | General∗ | Expensive |
| Steering vectors | Control | Low | Concept | Supervised | Specific-ish | Cheap |
| Training data attribution | Understanding | Low | Concept | Unsupervised | General∗ | Expensive |
| Auto-interp | Understanding | Low | Concept | Unsupervised | General∗ | Cheap |
| Component attribution | Both | Medium | Concept | Complicated | Specific | Cheap |
| Circuit analysis (causal) | Understanding | Medium | Algorithm | Complicated | Specific | Expensive |
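To make the "Probing" row concrete, here's a minimal sketch of a supervised linear probe (my own illustration, not code from the paper): train a cheap linear classifier on frozen activations to detect a concept. The activations and concept labels below are random placeholders, so accuracy will be at chance; in practice they would come from a model's forward passes and human annotation.

```python
# Minimal sketch of a supervised linear probe (hypothetical setup, not from the paper).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for real data: activations would come from a chosen layer of the model
# (e.g. the residual stream); labels mark whether the concept is present.
activations = rng.normal(size=(1000, 768))   # [n_examples, d_model]
labels = rng.integers(0, 2, size=1000)       # 1 = concept present, 0 = absent

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0
)

# The probe itself is just a linear classifier on frozen activations:
# supervised, concept-level, cheap to train, and specific to the trained concept.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")  # ~0.5 on random data
```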
Assorted random stuff that caught my attention:
They consider Exceptional AGI (Level 4) from Morris et al. (2023), defined as an AI system that matches or exceeds the capabilities of the 99th percentile of skilled adults on a wide range of non-physical tasks (contra the Metaculus “when AGI?” question, which requires diverse robotic capabilities, so their 2030 is probably an overestimate)
The irrelevance of physical limits to the paper’s scope: “By considering the construction of “the ultimate laptop”, Lloyd (2000) suggests that Moore’s law (formalized as an 18 month doubling) cannot last past 2250. Krauss and Starkman (2004) consider limits on the total computation achievable by any technological civilization in our expanding universe—this approach imposes a (looser) 600-year limit in Moore’s law. However, since we are very far from these limits, we do not expect them to have a meaningful impact on timelines to Exceptional AGI”
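As a quick back-of-the-envelope on why those limits are so far away (my own arithmetic, not the paper's), an 18-month doubling sustained until 2250 still leaves room for an astronomical number of doublings:

```python
# Back-of-the-envelope (my arithmetic, not the paper's): headroom left by an
# 18-month compute doubling before Lloyd's ~2250 cutoff.
start_year, end_year = 2025, 2250   # assumed start year; cutoff from Lloyd (2000)
doubling_time_years = 1.5
doublings = (end_year - start_year) / doubling_time_years
print(f"{doublings:.0f} doublings, i.e. a ~{2**doublings:.1e}x increase in compute")
# -> 150 doublings, i.e. a ~1.4e+45x increase
```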
Structural risks are “out of scope of this paper” because they’re “a much bigger category, often with each risk requiring a bespoke approach. They are also much harder for an AI developer to address, as they often require new norms or institutions to shape powerful dynamics in the world” (although “much of the technical work discussed in this paper will also be relevant for structural risks”)
Mistakes are also out of scope because “standard safety engineering practices (e.g. testing) can drastically reduce risks, and should be similarly effective for averting AI mistakes as for human mistakes… so we believe that severe harm from AI mistakes will be significantly less likely than misuse or misalignment, and is further reducible through appropriate safety practices”
The paper focuses “primarily on techniques that can be integrated into current AI development, due to our focus on anytime approaches to safety” i.e. excludes “research bets that pay out over longer periods of time but can provide increased safety, such as agent foundations, science of deep learning, and application of formal methods to AI”
Algorithmic progress papers: “Erdil and Besiroglu (2022) sought to decompose AI progress in a way that can be attributed to the separate factors of scaling (compute, model size and data) and algorithmic innovation, and concluded that algorithmic progress doubles effective compute budgets roughly every nine months. Ho et al. (2024) further extend this approach to study algorithmic improvements in the pretraining of language models for the period of 2012–2023. During this period, the authors estimate that the compute required to reach a set performance threshold halved approximately every eight months”
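Converting those doubling/halving times into annual multipliers (my own arithmetic, not figures reported in either paper):

```python
# Convert doubling/halving times into annualized effective-compute multipliers
# (my own conversion of the quoted figures, not numbers from the papers).
def annual_multiplier(doubling_time_months: float) -> float:
    """Effective-compute growth factor per year for a given doubling time."""
    return 2 ** (12 / doubling_time_months)

print(f"9-month doubling of effective compute -> ~{annual_multiplier(9):.2f}x per year")
print(f"8-month halving of required compute   -> ~{annual_multiplier(8):.2f}x per year")
# -> roughly 2.5x and 2.8x per year respectively
```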
Explosive economic growth paper: “Recent modeling by Erdil et al. (2025) that draws on empirical scaling laws and semi-endogenous growth theory and models changes in compute, automation and production supports the plausibility of very rapid growth in Gross World Product (e.g. exceeding 30% per year in 2045) when adopting parameters from empirical data, existing literature and reasoned judgment” (I’m still wondering how this will get around johnswentworth’s objection to using GDP to track this)
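Just to make that headline number concrete (my arithmetic, not the paper's), 30% per year compounds very quickly:

```python
# What ">30% per year" GWP growth compounds to (illustration only, not from the paper).
growth_rate = 0.30
for years in (5, 10, 20):
    print(f"{years:>2} years at 30%/yr -> GWP multiplied by ~{(1 + growth_rate) ** years:.1f}x")
# -> ~3.7x, ~13.8x, ~190x
```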
General competence scales smoothly with compute: “Owen (2024) find that aggregate benchmarks (BIG-Bench (Srivastava et al., 2023), MMLU (Hendrycks et al., 2020)) are predictable with up to 20 percentage points of error when extrapolating through one order of magnitude (OOM) of compute. Gadre et al. (2024) similarly find that aggregate task performance can be predicted with relatively high accuracy, predicting average top-1 error across 17 tasks to within 1 percentage point using 20× less compute than is used for the predicted model. Ruan et al. (2024) find that 8 standard downstream LLM benchmark scores across many model families are well-explained in terms of their top 3 principal components. Their first component scales smoothly across 5 OOMs of compute and many model families, suggesting that something like general competence scales smoothly with compute”
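A minimal sketch of the kind of extrapolation these papers do, with made-up data and a made-up saturating functional form (the actual papers use their own fitting procedures), purely to illustrate fitting an aggregate benchmark score against log-compute and predicting one OOM out:

```python
# Illustrative sketch in the spirit of Owen (2024) / Gadre et al. (2024):
# fit benchmark score against log10(compute) and extrapolate one OOM.
# The data points and sigmoid form below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_in_log_compute(log_c, k, midpoint):
    """Score as a saturating function of log10(training compute)."""
    return 1.0 / (1.0 + np.exp(-k * (log_c - midpoint)))

# Hypothetical observations: (log10 FLOP, aggregate benchmark score in [0, 1]).
log_compute = np.array([20.0, 21.0, 22.0, 23.0])
scores = np.array([0.25, 0.40, 0.58, 0.74])

params, _ = curve_fit(sigmoid_in_log_compute, log_compute, scores, p0=[1.0, 22.0])

# Extrapolate one order of magnitude beyond the largest observed run.
predicted = sigmoid_in_log_compute(24.0, *params)
print(f"predicted score at 1e24 FLOP: {predicted:.2f}")
```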
“given that total labor compensation represents over 50% of global GDP (International Labour Organisation, 2022), it is clear that the economic incentive for automation is extraordinarily large”