Recursive self-improvement, or some other flavor of PASTA, seems essentially inevitable conditional on not hitting hard physical limits or civilization being severely disrupted. There are Paul/EY debates about how discontinuous the capabilities jump will be, but the core idea, that systems automating their own development leads to an accelerating feedback loop (an intelligence explosion), is conceptually solid.
There are still AI risks without an intelligence explosion, but it is a key part of the fears of the people who think we're very doomed, because it creates the dynamic of getting only one shot at the real deal: the first system to go 'critical' will end up extremely capable.
(oh, looks like I already wrote this on Stampy! That version might be better, feel free to improve the wiki.)