“AI and Compute” trend isn’t predictive of what is happening
In May 2018 (almost 3 years ago) OpenAI published their “AI and Compute” blogpost, where they highlighted the trend of increasing compute spent on training the largest AI models and speculated that the trend might continue into the future. This note aims to show that the trend ended right around the time OpenAI published their post and no longer holds.
On the above image, I superimposed the scatter plot from the OpenAI blogpost and my estimates of the compute required for some recent large and ambitious ML experiments. To the best of my knowledge (and I have tried to check), there haven’t been any experiments that required more compute than those shown on the plot.
The main thing shown here is that less than one doubling of computational resources for the largest training run occurred in the 3-year period between 2018 and 2021, compared to around 10 doublings in the 3-year period between 2015 and 2018. This corresponds to a severe slowdown of computational scaling.
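To make the doubling comparison concrete, here is a minimal sketch. The 3.4-month doubling time is the figure OpenAI reported in “AI and Compute”; the 2018–2021 endpoint values (AlphaGo Zero at roughly 1.9e23 FLOP, GPT-3 at roughly 3.1e23 FLOP) are illustrative figures consistent with the plot, not exact.

```python
import math

# OpenAI's "AI and Compute" reported a ~3.4-month doubling time.
doubling_time_months = 3.4
window_months = 36  # the 3-year window discussed above

doublings_on_trend = window_months / doubling_time_months
print(f"doublings implied by the trend over 3 years: {doublings_on_trend:.1f}")  # ~10.6

# Observed 2018-2021 endpoints (illustrative): AlphaGo Zero ~1.9e23 FLOP,
# GPT-3 ~3.1e23 FLOP.
observed_doublings = math.log2(3.1e23 / 1.9e23)
print(f"observed doublings 2018-2021: {observed_doublings:.2f}")  # ~0.7, under one
```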
To stay on the trend line, we would currently need an experiment requiring roughly 100 times more compute than GPT-3. Considering that GPT-3 may have cost between $5M and $12M, and accelerators haven’t vastly improved since then, such an experiment would now likely cost $0.2B - $1.5B.
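The cost extrapolation is straightforward linear scaling from the quoted GPT-3 cost range; the hardware price-performance factors used to widen the band to $0.2B - $1.5B are my assumption, chosen to reproduce the quoted range.

```python
gpt3_cost = (5e6, 12e6)  # reported GPT-3 training cost range, USD
scale = 100              # compute multiple needed to stay on the trend line

# Naive linear scaling, assuming price per FLOP is roughly unchanged.
naive = tuple(cost * scale for cost in gpt3_cost)
print(f"naive linear scaling: ${naive[0]/1e9:.1f}B - ${naive[1]/1e9:.1f}B")  # $0.5B - $1.2B

# Allowing a hardware price-performance factor of ~0.4x-1.25x (assumed)
# widens this to roughly the $0.2B - $1.5B range quoted above.
widened = (naive[0] * 0.4, naive[1] * 1.25)
```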
My calculation for AlphaStar: 12 agents * 44 days * 24 hours/day * 3600 sec/hour * 420*10^12 FLOP/s * 32 TPUv3 boards * 33% actual board utilization = 2.02 * 10^23 FLOP which is about the same as AlphaGo Zero compute.
For 600B GShard MoE model: 22 TPU core-years = 22 years * 365 days/year * 24 hours/day * 3600 sec/hour * 420*10^12 FLOP/s/TPUv3 board * 0.25 TPU boards / TPU core * 0.33 actual board utilization = 2.4 * 10^21 FLOP.
For 2.3B GShard dense transformer: 235.5 TPU core-years = 2.6 * 10^22 FLOP.
Meena was trained for 30 days on a TPUv3 pod with 2048 cores. So it’s 30 days * 24 hours/day * 3600 sec/hour * 2048 TPUv3 cores * 0.25 TPU boards / TPU core * 420*10^12 FLOP/s/TPUv3 board * 33% actual board utilization = 1.8 * 10^23 FLOP, slightly below AlphaGo Zero.
Image GPT: “iGPT-L was trained for roughly 2500 V100-days”—this means 2500 days * 24 hours/day * 3600 sec/hour * 100*10^12 FLOP/s per V100 * 33% actual GPU utilization = 6.5 * 10^21 FLOP. There’s no compute data for the largest model, iGPT-XL. But based on the training-compute increase from GPT-3 XL (same number of params as iGPT-L) to GPT-3 6.7B (same number of params as iGPT-XL), I think it required 5 times more compute: 3.3 * 10^22 FLOP.
BigGAN: 2 days * 24 hours/day * 3600 sec/hour * 512 TPU cores * 0.25 TPU boards / TPU core * 420*10^12 FLOP/s/TPUv3 board * 33% actual board utilization = 3 * 10^21 FLOP.
AlphaFold: they say they trained on GPU and not TPU. Assuming V100 GPU, it’s 5 days * 24 hours/day * 3600 sec/hour * 8 V100 GPU * 100*10^12 FLOP/s * 33% actual GPU utilization = 10^20 FLOP.
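The per-project estimates above all instantiate the same formula: total FLOP ≈ devices × wall-clock seconds × peak FLOP/s × utilization. A minimal sketch reproducing them (peak figures and the 33% utilization are the assumptions stated above; core counts are converted to boards at 4 cores per TPUv3 board):

```python
def training_flop(devices, days, flops_per_device, utilization=0.33):
    """Total training FLOP = devices * wall-clock seconds * peak FLOP/s * utilization."""
    return devices * days * 24 * 3600 * flops_per_device * utilization

TPU_V3_BOARD = 420e12  # peak FLOP/s per TPUv3 board (4 cores), as used above
V100 = 100e12          # peak FLOP/s assumed for a V100 GPU above

alphastar = training_flop(devices=12 * 32,   days=44,   flops_per_device=TPU_V3_BOARD)  # ~2.0e23
meena     = training_flop(devices=2048 / 4,  days=30,   flops_per_device=TPU_V3_BOARD)  # ~1.8e23
biggan    = training_flop(devices=512 / 4,   days=2,    flops_per_device=TPU_V3_BOARD)  # ~3e21
alphafold = training_flop(devices=8,         days=5,    flops_per_device=V100)          # ~1e20
igpt_l    = training_flop(devices=1,         days=2500, flops_per_device=V100)          # ~7e21
```

Note that `igpt_l` comes out around 7e21 rather than the 6.5e21 quoted above; the quoted figure implies a slightly lower effective utilization.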
A previous calculation on LW gave 2.4 * 10^24 for AlphaStar (using values from the original AlphaStar blogpost), which suggested that the trend was roughly on track.
The differences between the 2 calculations are (your values first):
Agents: 12 vs 600
Days: 44 vs 14
TPUs: 32 vs 16
Utilisation: 33% vs 50% (I think this is just estimated in the other calculation)
Do you have a reference for the values you use?
I appreciate questioning of my calculations, thanks for checking!
This is what I think about the previous avturchin calculation: it may rest on a misinterpretation of the DeepMind blogpost. In the blogpost they say “The AlphaStar league was run for 14 days, using 16 TPUs for each agent”. But I think this doesn’t mean 16 TPUs for 14 days per agent; rather, each agent got 16 TPUs for roughly 14/n_agents = 14/600 days, because the 14 days covered the whole League training, in which agent policies were trained consecutively. Their wording is indeed not very clear, but you can look at the “Progression of Nash of AlphaStar League” pic. You can see there that, as they say, “New competitors were dynamically added to the league, by branching from existing competitors”, and that the new ones drastically outperform the older ones, meaning the older ones were not continuously updated and were only randomly picked as static opponents.
From the blogpost: “A full technical description of this work is being prepared for publication in a peer-reviewed journal”. The only publication about this is their late-2019 Nature paper linked by teradimich here which I have taken the values from. They have upgraded their algorithm and have spent more compute in a single experiment by October 2019. 12 agents refers to the number of types of agents and 600 (900 in the newer version) refers to the number of policies. About the 33% GPU utilization value—I think I’ve seen it in some ML publications and in other places for this hardware, and this seems like a reasonable estimate for all these projects, but I don’t have sources at hand.
Probably that:
This can be useful:
Correction: AlphaStar used 6*10^22 FLOP, not 2*10^23. You have mixed up TPU chips and TPU boards.
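The chips-vs-boards point can be made concrete: a TPUv3 board carries 4 chips, so using a per-chip peak of ~105 TFLOP/s (420/4; the exact per-chip figure depends on which spec is quoted) instead of 420 TFLOP/s per device cuts the estimate by 4x. A minimal sketch, with utilization and peak figures as assumed in the thread:

```python
# AlphaStar recomputation: boards vs chips.
per_board = 420e12        # peak FLOP/s per TPUv3 board (4 chips)
per_chip = per_board / 4  # ~105e12 FLOP/s per chip (assumed split)

seconds = 44 * 24 * 3600
devices = 12 * 32         # 12 agent types * 32 TPUv3 devices each
utilization = 0.33

as_boards = devices * seconds * per_board * utilization  # ~2.0e23 FLOP (original estimate)
as_chips = devices * seconds * per_chip * utilization    # ~5e22 FLOP, near the corrected ~6e22
print(f"as boards: {as_boards:.2e}, as chips: {as_chips:.2e}")
```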
What is the GShard dense transformer you are referring to in this post?
It should be referenced here in Figure 1: https://arxiv.org/pdf/2006.16668.pdf
gwern has recently remarked that one cause of this is supply-and-demand disruption, and that it may in principle be a temporary phenomenon.
One more question: for the BigGAN which model do your calculations refer to?
Could it be the 256x256 deep version?
Ohh OK I think since I wrote “512 TPU cores” it’s 512x512, because in Appendix C here https://arxiv.org/pdf/1809.11096.pdf they say it corresponds to 512x512.
Deep or shallow version?
“Training takes between 24 and 48 hours for most models”; I assumed both are trained within 48 hours (even though this is not precise and may be incorrect).
Assuming that prices haven’t improved, what revenue has anyone made to pay for the first $5-12 million tab?
For AI to take off, it has to pay for itself. It very likely will, but this requires deployment of highly profitable, working applications.
You mean GPT-3? Are you asking whether it’s made enough money to pay for itself yet?
My bigger point is that R&D money is finite and massive single models need to have a purpose. Going even bigger needs to accomplish something: either something you need for revenue, or an experiment where larger size is meaningful.