More timeline statements, from Eliezer in March 2016:
That said, timelines are the hardest part of AGI issues to forecast, by which I mean that if you ask me for a specific year, I throw up my hands and say “Not only do I not know, I make the much stronger statement that nobody else has good knowledge either.” Fermi said that positive-net-energy from nuclear power wouldn’t be possible for 50 years, two years before he oversaw the construction of the first pile of uranium bricks to go critical. The way these things work is that they look fifty years off to the slightly skeptical, and ten years later, they still look fifty years off, and then suddenly there’s a breakthrough and they look five years off, at which point they’re actually 2 to 20 years off.
If you hold a gun to my head and say “Infer your probability distribution from your own actions, you self-proclaimed Bayesian” then I think I seem to be planning for a time horizon between 8 and 40 years, but some of that is because there’s very little I think I can do in less than 8 years, and, you know, if it takes longer than 40 years there’ll probably be some replanning to do anyway over that time period.
And from me in April 2017:

Since [August], senior staff at MIRI have reassessed their views on how far off artificial general intelligence (AGI) is and concluded that shorter timelines are more likely than they were previously thinking. [...]
There’s no consensus among MIRI researchers on how long timelines are, and our aggregated estimate puts medium-to-high probability on scenarios in which the research community hasn’t developed AGI by, e.g., 2035. On average, however, research staff now assign moderately higher probability to AGI’s being developed before 2035 than we did a year or two ago.
I talked to Nate last month, and he outlined the same concepts and arguments from Eliezer’s Oct. 2017 post “There’s No Fire Alarm for AGI” (mentioned by Ben above) to describe his current view of timelines, in particular (quoting Eliezer’s post):
History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up. [...]
And again, that’s not to say that people saying “fifty years” is a certain sign that something is happening in a squash court; they were saying “fifty years” sixty years ago too. It’s saying that anyone who thinks technological timelines are actually forecastable, in advance, by people who are not looped in to the leading project’s progress reports and who don’t share all the best ideas about exactly how to do the thing and how much effort is required for that, is learning the wrong lesson from history. In particular, from reading history books that neatly lay out lines of progress and their visible signs that we all know now were important and evidential. It’s sometimes possible to say useful conditional things about the consequences of the big development whenever it happens, but it’s rarely possible to make confident predictions about the timing of those developments, beyond a one- or two-year horizon. And if you are one of the rare people who can call the timing, if people like that even exist, nobody else knows to pay attention to you and not to the Excited Futurists or Sober Skeptics. [...]
So far as I can presently estimate, now that we’ve had AlphaGo and a couple of other maybe/maybe-not shots across the bow, and seen a huge explosion of effort invested into machine learning and an enormous flood of papers, we are probably going to occupy our present epistemic state until very near the end.
By saying we’re probably going to be in roughly this epistemic state until almost the end, I don’t mean to say we know that AGI is imminent, or that there won’t be important new breakthroughs in AI in the intervening time. I mean that it’s hard to guess how many further insights are needed for AGI, or how long it will take to reach those insights. After the next breakthrough, we still won’t know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before. Whatever discoveries and milestones come next, it will probably continue to be hard to guess how many further insights are needed, and timelines will continue to be similarly murky. Maybe researcher enthusiasm and funding will rise further, and we’ll be able to say that timelines are shortening; or maybe we’ll hit another AI winter, and we’ll know that’s a sign indicating that things will take longer than they would otherwise; but we still won’t know how long.