It’s been over two and a half years since Paul put this blog post on takeoff speeds online. In particular, it argues that the “fast takeoff” undergone by humans is not very strong evidence that AIs will also undergo a fast takeoff, because evolution wasn’t “optimising for” humans taking over the world.
I think this argument has been fairly influential—possibly disproportionately influential, given its brevity. I find it moderately persuasive, but not entirely so, and I’m currently working on a post explaining why. What I’m wondering is: have there been other critiques or responses to this argument? Because it currently seems to me like there’s been very little public engagement with it.
No.
There was “My Thoughts on Takeoff Speeds” by tristanm.
Thanks for asking, I just read the post and was also interested in other people’s thoughts.
My thoughts while reading:
Is the emergence of humans really a good example of a significantly discontinuous jump? My initial guess is that the first humans didn’t actually perform much better than other apes, and that it took a long period of cultural development before humans started clearly dominating by using their increased strategizing/planning/coordinating capabilities.
Paul seemed unconvinced of the potential for major insights (or a “secret sauce”) about how to design discontinuously superior AIs. He wondered about analogous examples where major insights led to significant technological advances. This is probably covered well by the AI Impacts project on discontinuous technological developments, which found 10 relatively clear instances; e.g., the bridge length discontinuity was “based on a new theory of bridge design”.
Regarding his argument for why recursive self-improvement doesn’t lead to fast takeoff: “Summary of my response: Before there is AI that is great at self-improvement there will be AI that is mediocre at self-improvement.” I had the thought that there might be a “capability overhang” regarding self-improvement, because the ML field might currently be underrating the progress that could be made here and spending its time on other applications instead. I also find it plausible that a stable recursively self-improving architecture is a candidate for a major insight that somebody might have someday.
The AISafety.com Reading Group discussed this blog post when it was posted. There is a fair bit of commentary here: https://youtu.be/7ogJuXNmAIw