[Draft for commenting] Near-term AI risk predictions
“Predictions of the Near-Term Global Catastrophic Risks of Artificial Intelligence”
Abstract: In this article, we explore the risks of the appearance of dangerous AI in the near term (0–5 years) and medium term (5–15 years). Polls show that around 10 percent of the probability weight is given to the early appearance of artificial general intelligence (AGI), within the next 15 years. Neural net performance and other characteristics, such as the number of “neurons”, are doubling every year; extrapolating this trend suggests that roughly human-level performance will be reached in 4–6 years, around 2022–24. Hardware performance is also accelerating, thanks to advances in graphics processing units and the use of many chips in one processing unit, which have helped to overcome the limits of Moore’s law. Alternative extrapolations of technological development produce similar results. AI will become dangerous when it can solve the “computational complexity of omnicide”, or when it is able to create self-improving AI. The appearance of near-human AI will strongly accelerate the speed of AI development, and as a result, some form of superintelligent AI may appear before 2030.
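As a rough illustration of the doubling extrapolation in the abstract (a minimal sketch with made-up numbers, not figures from the article): if a metric doubles every year, the time to reach a target level is log2(target / current) years.

import math

def years_to_target(current, target, doubling_time_years=1.0):
    # Years until an exponentially growing metric reaches a target level.
    return doubling_time_years * math.log2(target / current)

# Hypothetical example: a benchmark score at 1/32 of human level that
# doubles yearly reaches human level in 5 years (e.g., 2018 -> 2023).
print(years_to_target(current=1 / 32, target=1.0))  # 5.0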
Highlights:
• The median of AI timing predictions is the wrong measure to use in AI risk assessment.
• The dangerous level of AI is defined by AI’s ability to facilitate a global catastrophe, and it could be reached before AGI.
• The growth rate of hardware performance for AI applications has accelerated since 2016, and Moore’s law will provide enough computational power for AGI in the near term (see the sketch after this list).
• The main measures of neural net performance have been doubling every year since 2012; if this trend continues, they will reach human level around 2022.
• Several independent methods predict near-human-level AI after 2022 and a “singularity” around 2030.
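To make the hardware point above concrete (a sketch with assumed doubling times, not data from the article): classic Moore’s law corresponds to a doubling roughly every two years, while the accelerated trend for AI hardware is taken here, purely for illustration, as a doubling every year.

def growth_factor(horizon_years, doubling_time_years):
    # Total growth of an exponentially improving metric over a horizon.
    return 2 ** (horizon_years / doubling_time_years)

decade = 10
print(growth_factor(decade, 2.0))  # Moore's-law pace: ~32x in a decade
print(growth_factor(decade, 1.0))  # assumed accelerated pace: ~1024x in a decade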
Full text open for commenting here: https://goo.gl/6DyTJG
I think it’s great that you and other people are investing time and thought into writing articles like these.
I also think it’s great that you’re soliciting early feedback to help improve the work.
I left some comments that I hope you find helpful.
Thanks for the comments; I will incorporate them today. I also have a question for you and other readers: maybe the article should have a catchier title? Something like: “Dangerous AI timing: after 2022 but before 2030”?
Generally yes, I think it’s better when titles reveal the answer rather than the question alone. “Dangerous AI timing” sounds a bit awkward to my ear. Maybe a title like “Catastrophically dangerous AI is plausible before 2030” would work.
Yes, good point.
pro essays wrote that AI used to be basically a set of rules, like if-then-else; now it “learns” by looking at tons of data and analyzing it, recording how the data relates and what the results were, then using that information to analyze new data and compare it to the previous.
:)
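To illustrate the shift described in the comment above (a toy sketch; the spam-filter task, data, and labels are invented for this example): old-style AI encodes explicit if-then rules written by a human, while a learned model is fit to labeled examples instead.

# Rule-based approach: a human writes the decision logic by hand.
def rule_based_spam(msg):
    return "free money" in msg.lower()

# Learned approach: fit a simple model to labeled examples (scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["free money now", "meeting at noon", "win free money", "lunch later?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(texts), labels)
print(model.predict(vec.transform(["free money inside"])))  # [1]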