I will ask this question: is the Singularity/huge-discontinuity scenario likely to happen? I see this as a meta-assumption behind all the doom scenarios, so we need to know whether the Singularity can happen and whether it will happen.
Paul Christiano provided a picture of non-Singularity doom in "What Failure Looks Like". In general there is a pretty wide range of opinions on questions of this sort—the AI-Foom debate between Eliezer Yudkowsky and Robin Hanson is a famous example, though an old one.
"Takeoff speed" is the common term for questions about the rate of change in AI capabilities at the human and superhuman level of general intelligence—searching LessWrong or the Alignment Forum for that phrase will turn up a lot of discussion of these questions, though I don't know of the best introduction offhand (hopefully someone else here has suggestions?).
It's definitely a common belief on this site. I don't think it's likely; I've written up some arguments here.
Recursive self-improvement, or some other flavor of PASTA, seems essentially inevitable conditional on not hitting hard physical limits and civilization not being severely disrupted. There are Paul/EY debates about how discontinuous the capabilities jump will be, but the core idea of systems automating their own development, leading to an accelerating feedback loop, or intelligence explosion, is conceptually solid.
There are still AI risks without an intelligence explosion, but it is a key part of the fears of the people who think we're very doomed: it creates the dynamic of getting only one shot at the real deal, since the first system to go 'critical' will end up extremely capable.
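To make the feedback-loop intuition concrete, here is a deliberately toy sketch (my own illustration with arbitrary numbers, not a model anyone in those debates endorses): it compares progress when research ability stays fixed against progress when the system's current capability feeds back into its own rate of improvement.

```python
# Toy model only: arbitrary units and constants, chosen to illustrate the
# qualitative difference between "no feedback" and "self-improvement feedback".

def simulate(feedback: bool, steps: int = 50, dt: float = 0.1) -> list[float]:
    capability = 1.0   # starting capability, arbitrary units
    base_rate = 0.2    # fixed contribution from outside (human) R&D
    history = [capability]
    for _ in range(steps):
        if feedback:
            # Recursive self-improvement: more capable systems speed up
            # their own development, so the growth rate itself grows.
            rate = base_rate * capability
        else:
            # No feedback: steady external R&D effort only.
            rate = base_rate
        capability += rate * capability * dt
        history.append(capability)
    return history

if __name__ == "__main__":
    no_loop = simulate(feedback=False)
    loop = simulate(feedback=True)
    print(f"after {len(no_loop) - 1} steps: "
          f"no feedback -> {no_loop[-1]:.1f}, with feedback -> {loop[-1]:.1f}")
```

Without the feedback term the toy model just grows exponentially; with it, growth is faster than exponential (the continuous version blows up in finite time), which is the shape of the 'one shot' worry rather than a prediction about real systems.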
(oh, looks like I already wrote this on Stampy! That version might be better, feel free to improve the wiki.)