The premise that “human-level AI” must be built around some form of learning (and the implication that learning is what needs to be improved) is highly dubious: it isn’t well evidenced at all, and it’s completely at odds with my own intuitions besides.
As it is, deep learning can be seen “simply” as a way to approximate a mathematical function. In the case of computer vision, one could see it as a function that twiddles with the images’ pixels and outputs a result. The genius of the approach is how relatively fast we can find a function that approximates the process of interest (compared to, say, classical search algorithms). A big caveat: human intuition is still required to find the right parameters and tweak the network, but it’s very conceivable that this could be improved.
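To make the function-approximation view concrete, here is a minimal sketch (PyTorch is my choice here, and the target function and network sizes are arbitrary): a small network is fitted to sample points of a known function by gradient descent alone, with no structural knowledge of the target.

```python
import torch
import torch.nn as nn

# The process we want to approximate -- here just an arbitrary 1-D function.
def target(x):
    return torch.sin(3 * x) + 0.5 * x

# A small multilayer perceptron: a function with tweakable parameters.
net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

xs = torch.linspace(-2, 2, 256).unsqueeze(1)   # sample inputs
ys = target(xs)                                 # observed outputs

# Gradient descent searches parameter space for a function close to the target --
# the "relatively fast" search, compared to classical search over programs.
for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(net(xs), ys)
    loss.backward()
    opt.step()

print(f"final approximation error: {loss.item():.4f}")
```

The hand-chosen pieces (layer sizes, learning rate, number of steps) are exactly the human-intuition caveat above.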
Nevertheless, we don’t have human-level AI here. At the very best, we have its pattern-matching component. That is an important component, to be sure, but we still don’t have an understanding of “concepts”, and there is no “reflection” as understood in computer science (a form of meta-programming where programming-language concepts are reified and made available to the programmer using the language). We need the ability to form new concepts (some of which will be patterns), but also to reason about the concepts themselves, to pattern-match on them. In short, to think about thinking. In that regard, it seems we’re still a long way off.
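For the programming-language sense of “reflection”, a quick illustration in Python (the names here are purely illustrative): functions and classes are ordinary objects that a running program can inspect and construct.

```python
import inspect

def greet(name: str) -> str:
    return f"hello, {name}"

# The program examines its own concepts: a function's signature and source code.
print(inspect.signature(greet))   # (name: str) -> str
print(inspect.getsource(greet))

# It can also build new concepts at runtime: a class assembled from a plain dict.
Greeter = type("Greeter", (object,), {"greet": staticmethod(greet)})
print(Greeter.greet("world"))     # hello, world
```

It is this kind of “thinking about the program’s own building blocks” that current networks have no analogue of.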
I think part of the assumption is that reflection can be bolted on trivially if the pattern matching is good enough. For example, consider guiding an SMT solver / automatic theorem prover with deep-learned heuristics, e.g. [https://arxiv.org/abs/1701.06972](https://arxiv.org/abs/1701.06972). We know how to express reflection in formal languages; we know how to train intuition for fuzzy stuff; we might learn how to train intuition for formal languages.
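The shape of that combination is roughly: a conventional prover enumerates candidate steps, and a learned model only decides which one to try first. A hypothetical sketch (nothing here is the cited paper’s actual interface; `score_step` stands in for whatever trained network is used):

```python
import heapq
from typing import Callable

def guided_search(initial_state,
                  expand: Callable,       # prover: state -> list of (step, next_state)
                  is_proof: Callable,     # prover: state -> bool (goal closed?)
                  score_step: Callable,   # learned heuristic: (state, step) -> float
                  max_expansions: int = 10_000):
    """Best-first proof search where only the *ordering* comes from the learned model.

    Soundness still rests entirely on the symbolic prover: a bad score only
    wastes time, it can never produce an incorrect proof.
    """
    counter = 0  # tie-breaker so heapq never compares states directly
    frontier = [(0.0, counter, initial_state, [])]
    for _ in range(max_expansions):
        if not frontier:
            return None
        _, _, state, path = heapq.heappop(frontier)
        if is_proof(state):
            return path
        for step, next_state in expand(state):
            counter += 1
            # heapq is a min-heap, so negate: higher model score = explored earlier.
            heapq.heappush(frontier,
                           (-score_step(state, step), counter, next_state, path + [step]))
    return None
```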
This is still borderline useless for now, but there is no reason, a priori, that such approaches are doomed to fail. Especially since labels for training data are trivial (just check the proof for correctness) and machine-discovered theorems / proofs can be added to the corpus.
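The “trivial labels” point is what makes a bootstrapping loop possible: every candidate proof can be checked mechanically, so the system can grow its own training set. A hypothetical outline (all function names are placeholders, not any existing system):

```python
def bootstrap(conjectures, prover, checker, model, rounds=5):
    """Self-training sketch: search -> verify -> add verified proofs to the corpus.

    `prover` searches guided by `model`; `checker` is the trusted proof verifier.
    Only proofs that pass the checker ever become training data, so mistakes
    made by the model cannot leak into the corpus as wrong labels.
    """
    corpus = []
    for _ in range(rounds):
        for goal in conjectures:
            proof = prover(goal, heuristic=model)
            if proof is not None and checker(goal, proof):  # label = "correct", for free
                corpus.append((goal, proof))
        model = model.retrain(corpus)  # better intuition for the next round
    return model, corpus
```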