Purely on Outside View grounds, or based on something more?
Outside View only. That’s the way it’s always worked out before, and I’m not seeing anything specific to Deep Learning to suggest that this time it will be different. But I am not a professional in this field.
So, some Inside View reasons to think this time might be different:
The results look better, and in particular, some of Google’s projects are reproducing high-level quirks of the human visual cortex.
The methods can absorb far larger amounts of computing power. Previous approaches could not, which makes sense, as we didn’t have the computing power for them to absorb at the time, but the human brain does appear to be almost absurdly computation-heavy. Moore’s Law is turning a difference in degree into a difference in kind.
That said, I (and most AI researchers, I believe) would agree that deep recurrent networks are only part of the puzzle. The neat thing is, they do appear to be part of the puzzle, which is more than you could say about e.g. symbolic logic; human minds don’t run on logic at all. We’re making progress, and I wouldn’t be surprised if deep learning is part of the first AGI.
some of Google’s projects are reproducing high-level quirks of the human visual cortex.
While the work that the visual cortex does is complex and hard to crack (from where we are now), it doesn’t seem like being able to replicate that leads to AGI. Is there a reason I should think otherwise?
There is the ‘one learning algorithm’ hypothesis: that most of the brain uses a single algorithm for learning and pattern recognition, rather than specialized modules, one for vision, another for audio, and so on.
The evidence comes from experiments where they cut the connection from the eyes to the visual cortex in an animal and rerouted it to the auditory cortex (and I think vice versa). The animal then learned to see fine; its auditory cortex just learned how to do vision instead.
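To make the machine-learning analogue of that hypothesis concrete, here is a toy sketch of my own (purely illustrative; the data and setup are made up, and this is not the rewiring experiment or any real system): the exact same small network and training loop, with nothing modality-specific in it, is fit to two different kinds of synthetic input.

```python
# Toy analogue of the 'one learning algorithm' idea (illustrative only, not the
# biology): the *same* generic learner, with nothing modality-specific in it,
# is trained on two different kinds of synthetic input.
import numpy as np

rng = np.random.default_rng(0)

def train_generic_learner(X, y, hidden=32, steps=2000, lr=0.1):
    """One generic learner: a single-hidden-layer net trained by gradient descent.
    It does not know whether X is 'visual' or 'auditory' data."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)                # learned representation
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))    # predicted probability
        # gradients of the cross-entropy loss
        dz2 = (p - y[:, None]) / n
        dW2 = h.T @ dz2; db2 = dz2.sum(0)
        dh = dz2 @ W2.T * (1 - h**2)
        dW1 = X.T @ dh; db1 = dh.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    return ((p[:, 0] > 0.5) == y).mean()        # training accuracy

# Two made-up 'modalities': noisy feature vectors ('images') and summed sinusoids ('audio').
X_img = rng.normal(size=(500, 16)); y_img = (X_img[:, :8].sum(1) > 0).astype(float)
t = np.linspace(0, 1, 16)
freqs = rng.integers(1, 5, size=500)
X_aud = np.sin(2 * np.pi * freqs[:, None] * t) + 0.3 * rng.normal(size=(500, 16))
y_aud = (freqs > 2).astype(float)

print("same learner on 'images':", train_generic_learner(X_img, y_img))
print("same learner on 'audio': ", train_generic_learner(X_aud, y_aud))
```

The point of the sketch is only that one learning procedure, applied unchanged to different input streams, picks up the structure of whichever stream it gets, which is the flavor of claim the hypothesis makes about cortex.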
which is more than you could say about e.g. symbolic logic; human minds don’t run on logic at all
This seems an odd thing to say. I would say that representation learning (the thing that neural nets do) and compositionality (the thing that symbolic logic does) are likely both part of the puzzle?
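To make that distinction concrete, here is a toy sketch of my own (purely illustrative; the names, vectors, and facts are made up): learned vectors stand in for representation learning, and a small recursive evaluator stands in for compositional, symbolic structure.

```python
# Toy contrast between the two ingredients (illustrative only):
#  - representation learning: meaning lives in learned continuous vectors
#  - compositionality: the meaning of a whole is built from the meanings of its parts
import numpy as np

rng = np.random.default_rng(0)

# (1) Representation learning, neural-net style: words get dense vectors, and
# similarity falls out of geometry. Random vectors here stand in for what a
# trained network would actually learn.
embedding = {w: rng.normal(size=8) for w in ["dog", "cat", "runs", "sleeps"]}

def similarity(a, b):
    va, vb = embedding[a], embedding[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print("similarity(dog, cat):", round(similarity("dog", "cat"), 3))

# (2) Compositionality, symbolic-logic style: the truth of a structured
# expression is computed recursively from its parts, so novel combinations
# are handled for free.
def evaluate(expr, facts):
    op = expr[0]
    if op == "fact":
        return expr[1] in facts
    if op == "not":
        return not evaluate(expr[1], facts)
    if op == "and":
        return evaluate(expr[1], facts) and evaluate(expr[2], facts)
    raise ValueError(f"unknown operator: {op}")

facts = {("dog", "runs")}
novel_expr = ("and", ("fact", ("dog", "runs")), ("not", ("fact", ("cat", "runs"))))
print("dog runs and cat doesn't:", evaluate(novel_expr, facts))
```

The first half gets graded similarity and learnability from data; the second half gets systematic generalization over structures it has never seen. A system that needs both strengths plausibly needs something from both camps.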