it would need 2⁷¹/0.15³ = (2×10⁹)× as much compute. Assuming that GPT-4 cost 10 million USD to train, this hypothetical AI would cost 2×10¹⁶ USD, or 200 years of global GDP₂₀₂₃.
This implies that the first AGI will not be a scaled-up GPT (an autoregressive transformer generatively pretrained on a lightly filtered text dataset). It would have to incorporate something else: multimodal data, higher-quality data, a better architecture, and so on. Even if we attempted to merely scale it up, turning Earth into a GPT factory,[6] devoting even 50% of global GDP to the effort,[7] with a 2% growth rate forever, it would still take 110 years,[8] arriving in the year 2133. Whole brain emulation would likely take less time.
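For concreteness, here is a minimal sketch of that arithmetic in Python. The dollar figures and the 50%/2% scenario parameters come from the text above; the assumptions that cost scales linearly with compute and that 2023 global GDP is roughly $100 trillion are mine, not the post's footnotes:

```python
import math

# Assumptions (mine, not the post's footnotes): training cost scales
# linearly with compute, GPT-4 cost $10M, 2023 global GDP ~= $100 trillion.
COMPUTE_MULTIPLIER = 2e9       # (2 x 10^9)x GPT-4's training compute
GPT4_COST_USD = 1e7            # 10 million USD
GLOBAL_GDP_2023_USD = 1e14     # ~100 trillion USD

cost = COMPUTE_MULTIPLIER * GPT4_COST_USD
print(f"hypothetical training cost: ${cost:.0e}")                     # ~2e+16
print(f"years of 2023 global GDP: {cost / GLOBAL_GDP_2023_USD:.0f}")  # ~200

# "GPT factory" scenario: spend 50% of GDP every year while GDP grows 2%/yr.
# Cumulative spend after t years is 0.5 * GDP * ((1.02**t - 1) / 0.02);
# solving for the t at which that reaches the target cost:
spend_fraction, growth = 0.5, 0.02
t = math.log(1 + cost * growth / (spend_fraction * GLOBAL_GDP_2023_USD)) \
    / math.log(1 + growth)
print(f"years until affordable at 50% of a growing GDP: {t:.0f}")     # ~110
```

Under these assumptions the script reproduces the headline numbers: about 200 years of frozen 2023 GDP, or about 110 years of a half-devoted, 2%-growing economy.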
I don’t think this is the correct response to thinking you need 9 OOMs more compute. 9 OOMs is a ton, but we’ve also been getting an OOM every few years, excluding the OOMs that come from increased $ investment. If you believe we would need 9 OOMs more than GPT-4, that’s a substantial update away from expecting LLMs to scale to AGI in the next, say, 8 years, but it’s not a strong update against 10–40 year timelines.
I think it’s easy to look at a large number like 9 OOMs and say things like “but that requires substantially more energy than all of humanity produces each year. No way!” (a thing I have said before in a similar context). But this thinking ignores the falling cost of compute and the strong trends underway right now. Building AGI with 2024 hardware might be pretty hard, but we won’t be stuck with 2024 hardware for long.
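A back-of-the-envelope check of why 9 OOMs is compatible with 10–40 year timelines (the one-OOM-every-2–4-years rates are an illustrative reading of “an OOM every few years,” not figures from the thread):

```python
# How long to accumulate 9 OOMs of effective compute at various rates?
# Illustrative assumption: one OOM every 2-4 years from hardware
# price-performance and algorithmic progress, excluding one-off jumps
# from increased $ investment.
OOMS_NEEDED = 9
for years_per_oom in (2, 3, 4):
    print(f"1 OOM per {years_per_oom} years -> {OOMS_NEEDED * years_per_oom} years total")
# -> 18, 27, and 36 years: well past ~8-year timelines, but inside 10-40.
```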