The post gives an argument for FOOM not happening right away after AGI. I think solid examples of FOOM are a superintelligence that fits into modern compute, and as-low-as-human-level intelligence running on nanotech-manufactured massive compute. LLMs are fast enough that if they turn AGI and get some specialized hardware from ordinary modern fabs, they can do serial theoretical research tens to hundreds of times faster than humans, even if they aren't smarter. Also, they are superhumanly erudite and have read literally everything. There's no need to redesign them while this is happening, unless redesigning turns out to be even more efficient than not doing it.
That gives decades of AI theory and chip design in a year, which could buy a lot of training efficiency: possibly enough for the superintelligence FOOM, if that's possible directly (in an aligned way), or at least for further acceleration of research if it's not. That further acceleration gets it to nanotech, at which point compute becomes many OOMs more abundant very quickly. That's FOOM enough even without superintelligence, though not having superintelligence by that point seems outlandish.
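To make the "decades in a year" arithmetic concrete, here's a minimal sketch of the implied conversion, assuming the speedup range above; the specific numbers, the parallel-copies caveat, and the function name are illustrative assumptions, not claims from the original comment.

```python
# Back-of-envelope: subjective research-years produced per calendar year
# by AGI-level workers running faster than humans. All numbers below are
# illustrative assumptions, not part of the original argument.

def subjective_research_years(serial_speedup: float,
                              parallel_copies: int,
                              calendar_years: float) -> float:
    """Calendar time compressed by serial speed, multiplied across copies.

    Note: parallel copies don't compress *serial* research the way a
    serial speedup does, so treat the product as an optimistic bound.
    """
    return serial_speedup * parallel_copies * calendar_years

# The "tens to hundreds of times faster" range, one calendar year,
# a single serial research thread:
for speedup in (10, 30, 100):
    years = subjective_research_years(speedup, 1, 1.0)
    print(f"{speedup:>4}x serial speedup -> {years:.0f} subjective years")
# 10x -> 10, 30x -> 30, 100x -> 100 subjective years per calendar year:
# i.e. "decades of AI theory and chip design in a year".
```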
Yann LeCun describes LLMs as "an off-ramp on the road to AGI," and I'm inclined to agree. LLMs themselves aren't likely to "turn AGI." Each generation of LLMs demonstrates the same fundamental flaws, even as they get better at hiding them.
But I also completely buy the “FOOM even without superintelligence” angle, as well as the argument that they’ll speed up AI research by an unpredictable amount.