General AI will not be made in 2011. Confidence: 90%.
Seems underconfident. Shane Legg, who is an AGI researcher, predicts:
My longest running prediction, since 1999, has been the time until roughly human level AGI. It’s been consistent since then, though last year I decided to clarify things a bit and put down an actual distribution and some parameters. Basically, I gave it a log-normal distribution with a mean of 2028, and a mode of 2025.
Although it really depends on what you mean by “General AI.”
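For concreteness: a log-normal is pinned down by its mean and mode, so Legg's two numbers determine the whole distribution once you pick an origin (calendar years aren't on a ratio scale, so a log-normal needs one). A minimal sketch recovering the parameters, assuming the distribution is over years after 2000 — the origin is my assumption, not something the quote specifies:

```python
import math

# Legg's stated parameters: log-normal with mean 2028 and mode 2025.
# Treating it as a distribution over T = (arrival year - 2000) is an
# assumption; the quote does not specify an origin.
mean, mode = 28.0, 25.0

# For T ~ LogNormal(mu, sigma^2):
#   mean = exp(mu + sigma^2 / 2)
#   mode = exp(mu - sigma^2)
# Taking logs and subtracting: ln(mean) - ln(mode) = (3/2) * sigma^2
sigma2 = (2.0 / 3.0) * (math.log(mean) - math.log(mode))
sigma = math.sqrt(sigma2)
mu = math.log(mean) - sigma2 / 2.0

median = math.exp(mu)  # the median sits between the mode and the mean
print(f"mu = {mu:.3f}, sigma = {sigma:.3f}, median year ~ {2000 + median:.0f}")
```

Under that assumption this gives mu ≈ 3.29, sigma ≈ 0.27, and a median of roughly 2027, sitting between the mode and the mean as a log-normal requires.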
How representative is Legg’s prediction of people in that field? It “feels” very optimistic, but I don’t have the relevant expertise. What do other researchers think?
Anecdotal evidence suggests Legg's prediction assigns far more probability mass to the next few decades (leaving almost none for subsequent development, social collapse, etc.) than the vast majority of researchers would. Surveys conducted among folk strongly selected for belief in AI (e.g. at Ben's AGI conference linked below, on transhumanist websites, etc.) put a big chunk of probability in that period, but usually not as much as Legg does.
Unfortunately, there aren't yet any representative expert polls, and it would be hard to assemble an expert class with expertise in neuroscience, AI, and the outside factors that could speed up progress (e.g. biotech enhancement). Worse, where folk have expressed skeptical views, those have almost always been bare denials of AI by a particular date, rather than probabilities. It seems fairly likely that the median relevant representative expert would assign a probability over 5% but less than 50% to Legg/Kurzweil timelines, particularly if you factored out mysticism/religion-based skepticism.
EDIT: Here’s another survey.
26 of the contributors to the NSF-backed report "Managing nano-bio-info-cogno innovations: converging technologies in society" were surveyed on the dates at which various technologies would be developed, with the median predictions reported in Appendix 1.
Page 344 gives a median estimate of 2085 for AI functionally equivalent to the human brain.
This is handy as a less selected group than attendees at an Artificial General Intelligence conference, although they’re still folk with a professional interest in futuristic technologies generally.
It seems fairly likely that the median relevant representative expert would assign a probability over 5% but less than 50% to Legg/Kurzweil timelines, particularly if you factored out mysticism/religion-based skepticism.
This is helpful. One question, though: does this mean "For any given year, a relevant expert would only assign 1/20 to 1/2 the probability of FOOM by that year that Legg and Kurzweil do"? If not, what does it mean?
Shane Legg says that there is a 95% probability of human-level AI by 2045. Kurzweil doesn't give probabilities, but claims high confidence in Turing-Test-passing AI by 2029 and a slow-takeoff Singularity over the two decades after that. I would bet that a representative sample of experts would assign less than 50% probability to human-level AI by 2045, but more than 5%.
Shane Legg says that there is a 95% probability of human-level AI by 2045.
I was surprised; his recent post didn't leave me with this impression, and I didn't remember his earlier statements well enough. But apparently this is correct: here's the post and a visualization of the prediction endorsed by Legg.
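As a rough sanity check, the 95%-by-2045 figure is broadly consistent with the mean-2028/mode-2025 log-normal quoted above, under the same assumed year-2000 origin (a minimal sketch, not Legg's own calculation):

```python
import math

def lognormal_cdf(x, mu, sigma):
    """P(T <= x) for T ~ LogNormal(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

# mu, sigma recovered earlier from mean 2028 / mode 2025,
# still assuming a year-2000 origin for the distribution.
mu, sigma = 3.294, 0.275

p_by_2045 = lognormal_cdf(45.0, mu, sigma)  # P(arrival year <= 2045)
print(f"P(human-level AI by 2045) ~ {p_by_2045:.2f}")  # ~0.97
```

Under this parameterization the 95th percentile lands around 2042, so "95% by 2045" fits to within the slack of the origin assumption.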
My page has some graphs:
http://alife.co.uk/essays/how_long_before_superintelligence/
Ben covered this issue in a 2010 paper:
How long until human-level AI? Results from an expert assessment
Cool, thanks.