Draft report on AI timelines
Hi all, I’ve been working on some AI forecasting research and have prepared a draft report on timelines to transformative AI. I would love feedback from this community, so I’ve made the report viewable in a Google Drive folder here.
With that said, most of my focus so far has been on the high-level structure of the framework, so the particular quantitative estimates are very much in flux and many input parameters aren't pinned down well. I wrote the bulk of this report before July and have received feedback since then that I haven't fully incorporated yet. I'd prefer if people didn't share it widely in a low-bandwidth way (e.g., just posting key graphics on Facebook or Twitter), since the conclusions don't reflect Open Phil's "institutional view" yet and there may well be some errors in the report.
The report includes a quantitative model written in Python. Ought has worked with me to integrate their forecasting platform Elicit into the model so that you can see other people’s forecasts for various parameters. If you have questions or feedback about the Elicit integration, feel free to reach out to elicit@ought.org.
Looking forward to hearing people’s thoughts!
Ajeya’s timelines report is the best thing that’s ever been written about AI timelines imo. Whenever people ask me for my views on timelines, I go through the following mini-flowchart:
1. Have you read Ajeya’s report?
--If yes, launch into a conversation about the distribution over 2020's training compute requirements and explain why I think the distribution should be substantially to the left, why I worry it might shift leftward faster than she projects, and why I think we should use it to forecast AI-PONR (the AI-induced point of no return) instead of TAI.
--If no, launch into a conversation about Ajeya’s framework and why it’s the best and why all discussion of AI timelines should begin there.
So, why do I think it’s the best? Well, there’s a lot to say on the subject, but, in a nutshell: Ajeya’s framework is to AI forecasting what actual climate models are to climate change forecasting (by contrast with lower-tier methods such as “Just look at the time series of temperature / AI performance over time and extrapolate,” “Make a list of factors that might push the temperature up or down in the future / make AI progress harder or easier,” and of course the classic “poll a bunch of people with vaguely related credentials”).
There’s something else which is harder to convey… I want to say Ajeya’s model doesn’t actually assume anything, or maybe it makes only a few very plausible assumptions. This is underappreciated, I think. People will say e.g. “I think data is the bottleneck, not compute.” But Ajeya’s model doesn’t assume otherwise! If you think data is the bottleneck, then the model is more difficult for you to use and will give more boring outputs, but you can still use it. (Concretely, you’d have a 2020 training compute requirements distribution with lots of probability mass way to the right, and then rather than say the distribution shifts to the left at a rate of about one OOM a decade, you’d input whatever trend you think characterizes the likely improvements in data gathering.)
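To make the mechanics concrete, here's a toy Monte Carlo sketch of the framework's core loop: sample a distribution over how much training compute TAI would have required with 2020 methods, shift it leftward over time as algorithms improve, and find when growing compute availability crosses it. All the specific numbers below are made up for illustration; they are not the report's actual estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustrative parameters, NOT the report's numbers:
# log10 of training FLOP needed for TAI at 2020 algorithms,
# modeled as normal with median 1e34 FLOP and a 3-OOM spread.
log10_req_2020 = rng.normal(loc=34.0, scale=3.0, size=100_000)

ALG_PROGRESS = 0.1       # requirements fall ~1 OOM per decade
COMPUTE_GROWTH = 0.2     # largest-run compute grows ~2 OOM per decade (toy)
LOG10_AVAILABLE_2020 = 25.0

# TAI arrives in the first year t (after 2020) where available compute
# meets the (shrinking) requirement:
#   LOG10_AVAILABLE_2020 + COMPUTE_GROWTH*t >= log10_req_2020 - ALG_PROGRESS*t
t = (log10_req_2020 - LOG10_AVAILABLE_2020) / (COMPUTE_GROWTH + ALG_PROGRESS)
arrival = 2020 + np.maximum(t, 0.0)

print("median arrival year:", round(float(np.median(arrival))))
print("P(TAI by 2050):", float(np.mean(arrival <= 2050)))
```

The point of the sketch is that the framework itself is just this accounting identity; all the substantive disagreement (data bottlenecks, faster algorithmic progress, etc.) lives in which distribution and which trend lines you feed in.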
The upshot of this is that I think a lot of people are making a mistake when they treat Ajeya’s framework as just another model to foxily aggregate over. “When I think through Ajeya’s model, I get X timelines, but then when I extrapolate out GWP trends I get Y timelines, so I’m going to go with (X+Y)/2.” I think instead everyone’s timelines should be derived from variations on Ajeya’s model, with extensions to account for things deemed important (like data collection progress) and tweaks upwards or downwards to account for the rest of the stuff not modelled.