Draft report on AI timelines
Hi all, I’ve been working on some AI forecasting research and have prepared a draft report on timelines to transformative AI. I would love feedback from this community, so I’ve made the report viewable in a Google Drive folder here.
With that said, most of my focus so far has been on the high-level structure of the framework, so the particular quantitative estimates are very much in flux and many input parameters aren’t pinned down well—I wrote the bulk of this report before July and have received feedback since then that I haven’t fully incorporated yet. I’d prefer if people didn’t share it widely in a low-bandwidth way (e.g., just posting key graphics on Facebook or Twitter) since the conclusions don’t reflect Open Phil’s “institutional view” yet, and there may well be some errors in the report.
The report includes a quantitative model written in Python. Ought has worked with me to integrate their forecasting platform Elicit into the model so that you can see other people’s forecasts for various parameters. If you have questions or feedback about the Elicit integration, feel free to reach out to elicit@ought.org.
Looking forward to hearing people’s thoughts!
Ajeya’s timelines report is the best thing that’s ever been written about AI timelines imo. Whenever people ask me for my views on timelines, I go through the following mini-flowchart:
1. Have you read Ajeya’s report?
--If yes, launch into a conversation about the distribution over 2020's training compute requirements and explain why I think the distribution should be substantially to the left, why I worry it might shift leftward faster than she projects, and why I think we should use it to forecast AI-PONR (the point of no return) rather than TAI.
--If no, launch into a conversation about Ajeya’s framework and why it’s the best and why all discussion of AI timelines should begin there.
So, why do I think it's the best? There's a lot to say on the subject, but in a nutshell: Ajeya's framework is to AI forecasting what actual climate models are to climate change forecasting. Contrast that with lower-tier methods such as "just look at the time series of temperature / AI performance over time and extrapolate," or "make a list of factors that might push the temperature up or down in the future / make AI progress harder or easier," and of course the classic "poll a bunch of people with vaguely related credentials."
There’s something else which is harder to convey… I want to say Ajeya’s model doesn’t actually assume anything, or maybe it makes only a few very plausible assumptions. This is underappreciated, I think. People will say e.g. “I think data is the bottleneck, not compute.” But Ajeya’s model doesn’t assume otherwise! If you think data is the bottleneck, then the model is more difficult for you to use and will give more boring outputs, but you can still use it. (Concretely, you’d have 2020's training compute requirements distribution with lots of probability mass way to the right, and then rather than say the distribution shifts to the left at a rate of about one OOM a decade, you’d input whatever trend you think characterizes the likely improvements in data gathering.)
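The mechanics described above can be sketched in a few lines of code. This is a minimal illustrative toy, not the report's actual model: the distribution parameters, the 1-OOM-per-decade shift, and the affordable-compute trend below are all placeholder assumptions chosen for demonstration, not numbers from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# ASSUMPTION (illustrative only): lognormal-ish uncertainty over how much
# training compute TAI would require with 2020 algorithms, in log10 FLOP.
req_2020 = rng.normal(loc=35.0, scale=3.0, size=100_000)

def p_tai_by(year,
             req_decline_ooms_per_decade=1.0,   # algorithmic progress
             afford_growth_ooms_per_decade=1.0,  # spending + hardware trends
             affordable_2020=26.0):              # log10 FLOP affordable in 2020
    """P(TAI by `year`): TAI arrives once effective requirements fall
    below affordable training compute. All trend parameters are placeholders."""
    decades = (year - 2020) / 10
    effective_req = req_2020 - req_decline_ooms_per_decade * decades
    affordable = affordable_2020 + afford_growth_ooms_per_decade * decades
    return float(np.mean(effective_req <= affordable))

for y in (2030, 2040, 2050):
    print(y, round(p_tai_by(y), 3))
```

The "data is the bottleneck" move corresponds to shifting `req_2020` far to the right and replacing `req_decline_ooms_per_decade` with whatever rate you think data gathering improves at; the framework itself is unchanged.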
The upshot of this is that I think a lot of people are making a mistake when they treat Ajeya’s framework as just another model to foxily aggregate over. “When I think through Ajeya’s model, I get X timelines, but then when I extrapolate out GWP trends I get Y timelines, so I’m going to go with (X+Y)/2.” I think instead everyone’s timelines should be derived from variations on Ajeya’s model, with extensions to account for things deemed important (like data collection progress) and tweaks upwards or downwards to account for the rest of the stuff not modelled.