Edit: This was written in 2013, so it is probably best viewed as a jumping-off point from which you can make further updates based on what has happened in the world since then.

See FAQ #4 on MIRI's website, quoted below:
In early 2013, Bostrom and Müller surveyed the one hundred top-cited living authors in AI, as ranked by Microsoft Academic Search. Conditional on “no global catastrophe halt[ing] progress,” the twenty-nine experts who responded assigned a median 10% probability to our developing a machine “that can carry out most human professions at least as well as a typical human” by the year 2023, a 50% probability by 2048, and a 90% probability by 2080.
Most researchers at MIRI approximately agree with the 10% and 50% dates, but think that AI could arrive significantly later than 2080. This is in line with Bostrom’s analysis in Superintelligence:
My own view is that the median numbers reported in the expert survey do not have enough probability mass on later arrival dates. A 10% probability of HLMI [human-level machine intelligence] not having been developed by 2075 or even 2100 (after conditionalizing on “human scientific activity continuing without major negative disruption”) seems too low.
Historically, AI researchers have not had a strong record of being able to predict the rate of advances in their own field or the shape that such advances would take. On the one hand, some tasks, like chess playing, turned out to be achievable by means of surprisingly simple programs; and naysayers who claimed that machines would “never” be able to do this or that have repeatedly been proven wrong. On the other hand, the more typical errors among practitioners have been to underestimate the difficulties of getting a system to perform robustly on real-world tasks, and to overestimate the advantages of their own particular pet project or technique.
Experts also reported a 10% median confidence that superintelligence would be developed within 2 years of human equivalence, and a 75% confidence that superintelligence would be developed within 30 years of human equivalence. Here MIRI researchers’ views differ significantly from AI experts’ median view; we expect AI systems to surpass humans relatively quickly once they near human equivalence.
Yes, but GPT-3 offers us new evidence we should try to update on. It's debatable how many bits of evidence it provides (see the sketch below for how bits translate into an updated probability), but we can also update based on this post, “Discontinuous progress in history: an update”:
Growth rates sharply changed in many trends, and this seemed strongly associated with discontinuities. If you experience a discontinuity, it looks like there’s a good chance you’re hitting a new rate of progress, and should expect more of that.
AlphaGo was something we saw before we expected it. The GPT-3 text generator was something we saw before we expected it. Both were discontinuities.
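To make the “bits of evidence” framing concrete, here is a minimal sketch of the arithmetic, with entirely hypothetical numbers (the prior and the bit count are placeholders, not estimates from this thread): each bit of evidence corresponds to a 2:1 likelihood ratio, so it doubles the odds in favor of the hypothesis.

```python
# Minimal sketch of updating on "bits of evidence".
# All numbers are hypothetical placeholders, not estimates from this thread.
prior = 0.10   # assumed prior P(shorter AI timelines)
bits = 1.0     # assumed evidence strength: 1 bit = a 2:1 likelihood ratio

# Each bit of evidence doubles the odds in favor of the hypothesis.
prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * 2 ** bits
posterior = posterior_odds / (1 + posterior_odds)

print(f"posterior = {posterior:.2f}")  # ~0.18 with these numbers
```

Whether GPT-3 (or AlphaGo) is worth one bit or several is exactly the debatable part; the sketch only shows how a given bit count would move the number.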
I agree. I’m not sure how much to update on the things you mention or on other things that have happened since 2013, so I think my answer serves more as a jumping-off point than as something authoritative. I edited it to mention that.