I think it’s well within the realm of possibility that it could happen a lot sooner than that. 20 years is a long time. 20 years ago the very first crude neural nets were just getting started, and it’s only in the past 5 years that the research really took off. And the rate of progress is only going to increase with so much funding and interest.
I recall notable researchers like Hinton making predictions that “X will take 5 years” and it being accomplished within 5 months. Go is a good example: even a year ago, many experts thought it would take another 10 years to beat, and few thought it would fall by 2016. In 2010 machine vision was so primitive that it was a running joke about how far AI still had to go.
By 2015 the best machine vision systems exceeded human performance at object recognition by a significant margin.
Google recently announced a neural net chip that is 7 years ahead of Moore’s law. Granted, that is only in terms of power consumption, and it only runs already-trained models, but it is nevertheless an example of the kind of sudden leap forward that is possible. Before that, Google started using farms of GPUs hundreds of times larger than what university researchers have access to.
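A rough back-of-the-envelope sketch of what “7 years ahead of Moore’s law” would imply, assuming the usual ~2-year doubling period (the doubling period and the performance-per-watt framing are my assumptions, not figures from Google):

    # Back-of-the-envelope: improvement implied by being "7 years ahead"
    # of Moore's law, assuming performance per watt doubles every ~2 years.
    doubling_period_years = 2.0   # assumed doubling period
    years_ahead = 7.0
    improvement = 2 ** (years_ahead / doubling_period_years)
    print(f"roughly {improvement:.0f}x better performance per watt")  # ~11x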
That’s just hardware though. I think the software is improving remarkably fast as well. We have tons of very smart people working on these algorithms: tweaking them, improving them bit by bit, gaining intuition about how they work, and testing crazy ideas to make them better. If evolution can develop human brains through nothing but stupid random mutations, then surely this process can go much faster. It feels like every week there is some amazing new advancement, like Google’s recent synthetic gradients paper, or hypernetworks.
I think one of the biggest things holding the field back is that it’s all focused on squeezing small improvements out of well-studied benchmarks like ImageNet. Machine vision is very interesting, of course, but at some point the improvements being made stop generalizing to other tasks. That is starting to change, though, as I mentioned in my comment above. DeepMind is focusing on playing games like StarCraft, which requires more focus on planning, recurrence, and reinforcement learning. There is also more focus now on natural language processing, which involves a lot of general-intelligence features.
“20 years ago the very first crude neural nets were just getting started”
The very first artificial neural networks were in the 1940s. Perceptrons 1958. Backprop 1975. That was over 40 years ago.
In 1992 Gerry Tesauro made a neural-network-based computer program that played world-class backgammon. That was 25 years ago.
What’s about 20 years old is “deep learning”, which really just means neural networks of a kind that was generally too computationally expensive in the past and that has become practical as a result of advances in hardware. (That’s not quite fair: there has also been plenty of progress in the design and training of these NNs, as a result of having hardware fast enough to make experimenting with them worthwhile.)
Having followed this field for 40 years, I’d say things have definitely sped up. Problems that seemed intractable, like the dog/cat problem, are now passé.
I see a confluence of three things: more powerful hardware allows more powerful algorithms to run, makes testing them possible at all, and, once possible, makes that testing much faster.
Researchers still don’t have access to anywhere near the ~10^15 FLOPS that roughly corresponds to the human brain. Exciting times ahead.
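To make that gap concrete, a minimal sketch of the comparison (the 10^15 figure is the rough brain estimate above; the ~10 TFLOPS per GPU is my assumption for a contemporary card):

    # Rough sketch: GPUs needed to reach the ~1e15 FLOPS brain estimate.
    brain_flops_estimate = 1e15   # rough figure quoted above
    gpu_flops_assumed = 1e13      # assumed ~10 TFLOPS per contemporary GPU
    gpus_needed = brain_flops_estimate / gpu_flops_assumed
    print(f"about {gpus_needed:.0f} GPUs to match the estimate")  # ~100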