Median doom time toward the end of the century? That seems enormously optimistic. If I believed this I’d breathe a huge sigh of relief, upgrade my cryonics coverage, spend almost all current time and funding trying to launch CFAR, and write a whole lot more about the importance of avoiding biocatastrophes and moderating global warming and so on. I might still work on FAI due to comparative advantage, but I’d be writing mostly with an eye to my successors. But it just doesn’t seem like ninety more years out is a reasonable median estimate. I’d expect bloody uploads before 2100.
Carl, ???
AI has had 60 years or more, depending on when you start counting, with (the price-performance cognate of) Moore’s law running through that time: the progress we’ve seen reflects both hardware and software innovation. Hardware progress probably slows dramatically this century, although neuroscience knowledge should get better.
Looking at a lot of software improvement curves for specific domains (games, speech, vision, navigation), big innovations don’t seem to be coming much faster than they used to, and trend projection suggests decades to reach human performance on many tasks which seem far from AGI-complete. Technologies like solar panels or electric vehicles can take many decades to become useful enough to compete with rivals.
Intermediate AI progress has fallen short of Kurzweilian predictions, although it’s still decent. Among AI people, AGI before the middle of the century is a view seen mainly in groups selected for AGI enthusiasm, like the folks at the AGI conference, and less so in the broader AI community. And there’s Robin’s progress metric (although it still hasn’t been done for other fields, especially the ones making the most progress).
Are we halfway there, assuming we can manage to keep up this much progress (when progress in many other technological fields is slowing)?
Intelligence enhancement for researchers, uploads, and other boosts could help a lot, but IA will probably be a long time coming (biology is slow: FDA for drugs, maturation for genetic engineering) and uploads are very demanding of hardware technology and require much better brain models (correlated with AI difficulty).
I didn’t say 87 years, but closer to 87 than 32 (or 16, for Kurzweil’s prediction of a Turing-Test passing AI).
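For concreteness, here’s a minimal sketch of the date arithmetic behind those year counts, assuming 2013 as the baseline; reading Kurzweil’s Turing-Test prediction as 2029 is my assumption, not something stated above:

```python
# Hedged illustration: converting "years from now" into calendar dates,
# assuming the conversation takes place in 2013.
baseline = 2013

horizons = {
    "end of the century (rough median)": 87,        # 2013 + 87 = 2100
    "2045 intelligence explosion": 32,              # 2013 + 32 = 2045
    "Kurzweil Turing-Test-passing AI (2029)": 16,   # 2013 + 16 = 2029
}

for label, years in horizons.items():
    print(f"{label}: {baseline + years}")
```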
The main thing that makes me suspect we might have AGI before 2100 is neuroprostheses: in addition to bionic eyes for humans, we’ve got working implants that replicate parts of hippocampal and cerebellar function in rats. At least one computational neuroscientist I know has told me that we could replicate the human cerebellum pretty soon as well, but that the hard problem lies in finding suitable connections for interfacing the brain with computers well enough. He was also willing to go on record saying that neocortex prostheses are not that far away.
If we did have neural prostheses—the installation of which might end up becoming a routine medical procedure—they could no doubt be set to also record any surrounding brain activity, thus helping reverse engineer the parts we don’t have figured out yet. Privacy issues might limit the extent to which that was done with humans, but less so for animals. X years to neuroprosthesis-driven cat uploads and then Y years to figuring out their neural algorithms and then creating better versions of those to get more-or-less neuromorphic AGIs.
The crucial variables for estimating X would be the ability to manufacture chips small enough to replace brain function, and the ability to reliably interface them with the brain without risk of rejection or infection. I don’t know how the latter is currently projected to develop.
The hippocampal implant has been extended to monkeys.
I want one!
Thanks, I’d missed that.
This is a rather important point. How do we get more info on it? You’re the first halfway-sane person I’ve ever heard put the median at 2100.
From my perspective if you told me that in actual fact AGI had been developed in 2120 (a bit of a ways after your median) despite the lack of any great catastrophes, I would update in the direction of believing all of the following:
Rogue biotech hadn’t actually been a danger. You didn’t make any strong predictions about this because it was outside your conditional; I don’t know much about it either. Basically I’m just noting it down. Also, no total global worse-than-Greece collapse, no nuclear-proliferated war brought on by global warming, etc.
Moore’s Law had come to a nearly complete permanent halt or slowdown no more than 10-20 years after 2013.
AI academia was Great Stagnating (this is relatively easy to believe)
Machine learning techniques that actually had non-stagnat-y people pushing on them for stock-market trading also plateaued, or weren’t published, or never AGI-generalized.
All the Foresight people were really really optimistic about nanotech, nobody cracked protein folding, or that field Great Stagnated somehow… the nanotech-related news I see, especially about protein folding, doesn’t seem to square with this, but perhaps the press releases are exaggerated.
Large updates in the direction of global economic slowdown, patent wars kill innovation everywhere, corruption of universities even worse than we think, even fewer smart people try to go into real tech innovation, etcetera.
Biotech stays regulation-locked forever—not too hard to believe.
Anders Sandberg is wrong about basically everything to do with uploading.
It seems like I’d have to execute a lot of updates. How do we resolve this?
“Moore’s Law had come to a nearly complete permanent halt or slowdown no more than 10-20 years after 2013.”
Well, atom-size features are scheduled to come along on that time-scale, and are believed to mark the end of scaling feature size downwards. That has been an essential part of Moore’s law all along the way. Without it, one has to instead do things like use more efficient materials at the same size, new architectural designs, new cooling, etc. That’s a big change in the underlying mechanisms of electronics improvement, and a pretty reasonable place for the trend to go awry, although it also wouldn’t be surprising if it kept going for some time longer.
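A rough back-of-the-envelope sketch of why atomic-scale features arrive on roughly that time-scale; the starting process node, the shrink cadence, and the cutoff are illustrative assumptions, not figures from the discussion:

```python
# Hedged sketch: how long until feature sizes hit atomic scale if linear
# dimensions keep shrinking ~30% every two years (one classic Moore's-law
# cadence)?  The starting node and the cutoff are assumptions.
feature_nm = 22.0        # assumed 2013-era process node, in nanometres
atomic_scale_nm = 1.0    # a feature only ~2 atoms wide (Si lattice ~0.54 nm)
shrink_per_step = 0.7    # ~0.7x linear shrink per 2-year node step

year = 2013
while feature_nm > atomic_scale_nm:
    feature_nm *= shrink_per_step
    year += 2

print(f"Atomic-scale features reached around {year} under these assumptions")
```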
“AI academia was Great Stagnating (this is relatively easy to believe)”
The so-called “Great Stagnation” isn’t actually a stagnation; it’s mainly just compounding growth at a slower rate. How much of the remaining distance to AGI do you think was covered 2002-2012? 1992-2002?
“All the Foresight people were really really optimistic about nanotech”
Haven’t they been so far?
In any case, nanotechnology can’t shrink feature sizes below atomic scale, and that’s already coming up via conventional technology. Also, if the world is one where computation is energy-limited, denser computers that use more energy in a smaller space aren’t obviously that helpful.
“perhaps the press releases are exaggerated”
Could you give some examples of what you had in mind?
“Large updates in the direction of global economic slowdown, patent wars kill innovation everywhere, corruption of universities even worse than we think, even fewer smart people try to go into real tech innovation, etcetera.”
Well, there is demographic decline: rich country populations are shrinking. China is shrinking even faster, although bringing its youth into the innovation sectors may help a lot.
“Biotech stays regulation-locked forever—not too hard to believe.”
Say biotech genetic engineering methods are developed in the next 10-20 years, heavily implemented 10 years later, and the kids hit their productive prime 20 years after that. Then they go faster, but how much faster? That’s a fast biotech trajectory to enhanced intelligence, but the fruit mostly falls in the last quarter of the century.
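A minimal sketch of that timeline arithmetic, assuming a 2013 starting point and taking the upper end of the 10-20 year range for method development:

```python
# Hedged sketch of the fast-biotech-to-enhanced-researchers timeline above,
# assuming a 2013 vantage point and the upper end of the "next 10-20 years"
# range for method development.
start = 2013
methods_developed = start + 20                 # genetic engineering methods exist
widely_implemented = methods_developed + 10    # heavily implemented a decade later
productive_prime = widely_implemented + 20     # enhanced kids reach their prime

print(f"Enhanced researchers hit their prime around {productive_prime}")
# -> around 2063, with their main contributions accumulating after that,
#    i.e. mostly in the last quarter of the century.
```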
“Anders Sandberg is wrong about basically everything to do with uploading.”
See 15:30 of this talk: Anders’ Monte Carlo simulation (assumptions debatable, obviously) yields a wide curve centered around 2075. Separately, Anders expresses nontrivial uncertainty about the brain model/cognitive neuroscience step, setting aside the views of the non-Anders population.
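For readers who haven’t seen the talk, here’s a minimal sketch of the kind of staged Monte Carlo estimate being described; the stage breakdown and every distribution below are made-up placeholders of mine, not Anders’ model:

```python
# Hedged illustration of a staged Monte Carlo timeline estimate, loosely in the
# spirit of the kind of calculation described above.  Every distribution here
# is a made-up placeholder, not a number taken from Anders' model.
import random
import statistics

def sample_upload_year(start=2013):
    scanning = random.lognormvariate(3.0, 0.5)   # years to adequate scanning tech
    modelling = random.lognormvariate(3.1, 0.6)  # years to adequate brain models
    hardware = random.lognormvariate(3.0, 0.5)   # years to cheap-enough hardware
    # Treat the stages as roughly sequential for simplicity.
    return start + scanning + modelling + hardware

samples = sorted(sample_upload_year() for _ in range(100_000))
print("median:", round(statistics.median(samples)))
print("10th-90th percentile:",
      round(samples[len(samples) // 10]), "to", round(samples[9 * len(samples) // 10]))
```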
“You’re the first halfway-sane person I’ve ever heard put the median at 2100.”
vs
“I didn’t say 87 years, but closer to 87 than 32 (or 16, for Kurzweil’s prediction of a Turing-Test passing AI).”
I said “near the end of the century” contrasted to a prediction of intelligence explosion in 2045.
“atom-size features are scheduled to come along on that time-scale”
A very tangential point, but in his 1998 book Robot, Hans Moravec speculates about atoms made from alternative subatomic particles that are smaller and able to absorb, transmit and emit more energy than the versions made from electrons and protons.
“Could you give some examples of what you had in mind?”
Here’s one: http://phys.org/news/2012-08-d-wave-quantum-method-protein-problem.html
That doesn’t apply to large proteins yet, but it doesn’t make me optimistic about the nanotech timeline. (Which is to say, it makes me update in favor of faster R&D.)
http://blogs.nature.com/news/2012/08/d-wave-quantum-computer-solves-protein-folding-problem.html
It’s also worth pointing out that conventional computers could already solve these particular protein folding problems. You have a computer doing something we could already do, but less efficiently than existing methods, which have not been impressively useful themselves?
ETA: https://plus.google.com/103530621949492999968/posts/U11X8sec1pU
The G+ post explains what it’s good for pretty well, doesn’t it?
It’s not a dramatic improvement (yet), but it’s a larger potential speedup than anything else I’ve seen on the protein-folding problem lately.
You can duplicate that D-Wave machine on a laptop.
True, but somewhat beside the point; it’s the asymptotic speedup that’s interesting.
...you know, assuming the thing actually does what they claim it does. sigh
Also no asymptotic speedup.
Nobody believes in D-Wave.
That seems like an oversimplification. Clearly some people do.
Scott Aaronson:
“I no longer feel like playing an adversarial role. I really, genuinely hope D-Wave succeeds.” That said, he noted that D-Wave still hadn’t provided proof of a critical test of quantum computing.
I am not qualified to judge whether D-Wave’s claim that their adiabatic quantum computing uses quantum annealing, rather than standard simulated annealing (as Scott suspects), is justified. However, the lack of independent replication of their claims is disconcerting.
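For anyone unfamiliar with the distinction, classical simulated annealing (what Scott suspects the machine is effectively doing) looks roughly like the sketch below, run here on a toy Ising-style problem of my own invention; quantum annealing, which D-Wave claims to perform, would replace the thermal uphill moves with quantum tunneling and is not implemented here:

```python
# Minimal classical simulated-annealing sketch on a toy Ising-style ring,
# for contrast with the quantum annealing D-Wave claims to perform.  The
# problem instance and cooling schedule are arbitrary illustrations.
import math
import random

def energy(spins, couplings):
    # Ising-style energy: sum of J_ij * s_i * s_j over coupled pairs.
    return sum(j * spins[a] * spins[b] for (a, b), j in couplings.items())

def simulated_annealing(n=20, steps=20_000, t_start=5.0, t_end=0.01):
    couplings = {(i, (i + 1) % n): random.choice([-1.0, 1.0]) for i in range(n)}
    spins = [random.choice([-1, 1]) for _ in range(n)]
    e = energy(spins, couplings)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = random.randrange(n)
        spins[i] *= -1                     # propose flipping one spin
        e_new = energy(spins, couplings)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new
        else:
            spins[i] *= -1                 # reject: undo the flip
    return spins, e

best_spins, best_energy = simulated_annealing()
print("final energy:", best_energy)
```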
Maybe they could get Andrea Rossi to confirm.
This is puzzling.
I had thought that the question of AI timelines was so central that the core SI research community would have long since Aumannated and come to a consensus probability distribution.
Anyway, good you’re doing it now.
Maybe I was absent from the office that day? I hadn’t heard Carl’s 2083 estimate (I recently asked him in person what the actual median was, and he averaged his last several predictions together to get 2083) until now, and it was indeed outside what I thought was our Aumann-range, hence my surprise.
It seems like the sort of thing people would plan to do on a day you were going to be in the office.
We had discussed timelines to this effect last year.
I’m wondering why this is stated as a conjunction. Would a single failure here really result in an early AGI development?
If I go in the garage and observe that the floor is wet, I would update in the direction of
it rained last night; and
Frank left the garage door open again; and
the tools the boys negligently left outside in the grass all night got rusty.
But that of course does not mean that if the tools had not gotten rusty, the garage floor would not have gotten wet.
In other words, Eliezer was writing about statistical relationships, and you seem to have mistaken them for causal relationships.
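A tiny numerical version of that point, with invented priors and likelihoods: observing the wet floor raises the probability of each candidate explanation, even though none of them is individually required for the floor to be wet:

```python
# Hedged illustration of the wet-garage-floor point with invented numbers:
# conditioning on the observation raises the probability of several hypotheses
# at once, without making any one of them necessary for the observation.
from itertools import product

p_rain, p_door_open = 0.3, 0.4           # made-up priors

def p_wet(rain, door_open):
    # Made-up likelihoods: either cause alone can wet the floor.
    if rain and door_open:
        return 0.95
    if rain or door_open:
        return 0.6
    return 0.05

joint = {}
for rain, door in product([True, False], repeat=2):
    p = (p_rain if rain else 1 - p_rain) * (p_door_open if door else 1 - p_door_open)
    joint[(rain, door)] = p

p_wet_total = sum(p * p_wet(r, d) for (r, d), p in joint.items())
p_rain_given_wet = sum(p * p_wet(r, d) for (r, d), p in joint.items() if r) / p_wet_total
p_door_given_wet = sum(p * p_wet(r, d) for (r, d), p in joint.items() if d) / p_wet_total

print(f"P(rain) = {p_rain}, P(rain | wet floor) = {p_rain_given_wet:.2f}")
print(f"P(door open) = {p_door_open}, P(door open | wet floor) = {p_door_given_wet:.2f}")
# Both posteriors go up, yet the floor could have been wet without either one.
```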
BTW regarding Robin’s AI progress metric, my reaction is more like Doug’s (the first / most upvoted comment).
I agree with that comment that machine learning has been on a roll, but Robin’s reply is important too. We can ask how machine learning shows up in the performance statistics for particular tasks to think about its relative contribution.