Paul Christiano provided a picture of non-Singularity doom in "What Failure Looks Like". In general there is a pretty wide range of opinions on these questions—the AI-Foom debate between Eliezer Yudkowsky and Robin Hanson is a famous example, though an old one.
"Takeoff speed" is a common term for questions about the rate of change in AI capabilities at the human and superhuman level of general intelligence—searching LessWrong or the Alignment Forum for that phrase will turn up a lot of discussion about these questions, though I don't know of the best introduction offhand (hopefully someone else here has suggestions?).