Yep!
I want to distinguish between “deep learning by itself is probably not general intelligence” (which I believe) and “nobody is making progress towards general intelligence” (which I’m uncertain about and definitely don’t think is safe to assume.)
What makes recent “deep learning” progress interesting to me is that traditionally there’s been a sort of paradox in AI: things we might naively think of as impressive achievements of the human intellect (e.g., grandmaster-level chess) turned out to be much easier to get computers to do than things we take for granted because even averagely intelligent children do them without much trouble (e.g., looking at a cat and saying “cat”) -- and deep neural networks seem (not hugely surprisingly, perhaps) to be a good approach to some of those.
That doesn’t, of course, mean that deep NNs + good old-fashioned AI = human-level intelligence. There are still what seem like important gaps that no one has very good ideas how to fill. But it does seem like one gap is getting somewhat filled.
It is definitely true that progress towards AGI is being made, at least if we count indirect progress: more money is being thrown at the problem, and, importantly, the fact that perceptual challenges are being solved means there is now a greater ROI for progress on symbolic AI.
A world with lots of stuff that is just waiting for AGI-tech to be plugged into it is a world where more people will try hard to make that AGI-tech. Examples of ‘stuff’ would include robots, drones, smart cars, better compute hardware, corporate interest in the problem (and money), highly refined perceptual algorithms that are fast and easy to use, lots of datasets, platforms like OpenAI’s Universe, etc.
A lot of what was created from 1960 to 1990 helped create the conditions for machine learning: the internet, Moore’s law, databases, operating systems, open source software, a computer science education system, etc.