The problem with AGI is not that AIs have no ability to learn “concepts”, it’s that the G in ‘AGI’ is very likely ill-defined. Even humans are not ‘general intelligences’; they’re just extremely capable aggregates of narrow intelligences that collectively implement the rather complex task we call “being a human”. Narrow AIs that implement ‘deep learning’ can learn ‘concepts’ that are tailored to their specific task; for instance, the DeepDream AI famously learns a variety of ‘concepts’ that relate to something looking like a dog. And sometimes these concepts turn out to be usable in a different task, but this is essentially a matter of luck. In the Amazon reviews case, the ‘sentiment’ of a review turned out to be a good predictor of the rest of its text, even after controlling for the sorts of low-order correlations that character-based RNNs can be expected to model most easily. I don’t see this as especially surprising, or as having much implication about possible ‘AGI’.
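For concreteness, here is a minimal toy sketch of the kind of experiment the Amazon-reviews point gestures at: train a character-level language model with no access to labels, then fit a linear probe on its hidden state to see whether a ‘sentiment’ feature emerged anyway. This is not the original mLSTM/Amazon setup; the data, model sizes, and hyperparameters below are placeholder assumptions purely for illustration, using PyTorch.

```python
# Toy sketch (assumed setup, not the experiment referenced above): an unsupervised
# char-level LSTM language model, followed by a linear probe for sentiment.
import torch
import torch.nn as nn

# Placeholder "reviews" with sentiment labels; the LM never sees the labels.
reviews = [
    ("great product, works perfectly", 1),
    ("terrible quality, broke in a day", 0),
    ("absolutely love it", 1),
    ("waste of money, very disappointed", 0),
]
chars = sorted({c for text, _ in reviews for c in text})
stoi = {c: i for i, c in enumerate(chars)}

def encode(text):
    return torch.tensor([stoi[c] for c in text])

class CharLM(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)  # next-character prediction

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h), h

model = CharLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Unsupervised phase: predict the next character only; no sentiment supervision.
for _ in range(200):
    for text, _ in reviews:
        ids = encode(text).unsqueeze(0)
        logits, _ = model(ids[:, :-1])
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, len(chars)), ids[:, 1:].reshape(-1)
        )
        opt.zero_grad()
        loss.backward()
        opt.step()

# Probe phase: a linear classifier on the final hidden state. If sentiment is
# linearly recoverable here, the LM learned a feature it was never asked for.
with torch.no_grad():
    feats = torch.stack([model(encode(t).unsqueeze(0))[1][0, -1] for t, _ in reviews])
labels = torch.tensor([y for _, y in reviews]).float()
probe = nn.Linear(feats.shape[1], 1)
probe_opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
for _ in range(200):
    pred = probe(feats).squeeze(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(pred, labels)
    probe_opt.zero_grad()
    loss.backward()
    probe_opt.step()

print((probe(feats).squeeze(-1) > 0).long().tolist(), labels.long().tolist())
```

Whether such a probe succeeds on a given task is exactly the kind of transfer the comment describes as a matter of luck rather than generality.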
Humans are general intelligences, and that is precisely a matter of having completely general concepts. Is there something you cannot think about? Suppose there is. Then let’s think about that thing. Now there is nothing you cannot think about. No current computer AI can do this; when one can, it will in fact be an AGI.