In other words, I largely agree with Ben Goertzel’s assertion that there is a fundamental difference between “narrow AI” and AI research that might eventually lead to machines capable of cognition, but I’m not sure I have good evidence for this argument.
One obvious piece of evidence is that many forms of narrow learning are mathematically incapable of doing much. There are, for example, a whole host of theorems about what different classes of neural networks can actually recognize, and the results aren’t very impressive. Similarly, support vector machines have a lot of trouble learning anything that isn’t a very simple statistical model, and even then humans need to decide which statistics are relevant. Other linear classifiers run into similar problems.
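A minimal sketch makes the linear-classifier point concrete: XOR is the textbook function no linear boundary can separate, so the plain perceptron learning rule never converges on it (the epoch cap below is an arbitrary choice for illustration).

```python
import numpy as np

# XOR: no single line separates the positive and negative points, so the
# perceptron rule keeps cycling and never reaches zero errors.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

w, b = np.zeros(2), 0.0
for epoch in range(1000):
    errors = 0
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        if pred != target:
            w += (target - pred) * xi   # standard perceptron update
            b += target - pred
            errors += 1
    if errors == 0:
        break

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print("learned:", preds, "wanted:", list(y))  # never matches on all four
```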
I work in this field, and was under approximately the opposite impression: that voice and visual recognition are rapidly approaching human levels. If I’m wrong and there are sharp limits, I’d like to know. Thanks!
Machine intelligence has surpassed “human level” in a number of narrow domains. Already, humans can’t manipulate enough data to do anything remotely like what a search engine or a stockbot does.
The claim seems to be that in narrow domains there are often domain-specific “tricks”—that wind up not having much to do with general intelligence—e.g. see chess and go. This seems true—but narrow projects often broaden out. Search engines and stockbots really need to read and understand the web. The pressure to develop general intelligence in those domains seems pretty strong.
Those who make a big deal about the distinction between their projects and “mere” expert systems are probably mostly trying to market their projects before they are really experts at anything.
One of my videos discusses the issue of whether the path to superintelligent machines will be “broad” or “narrow”:
http://alife.co.uk/essays/on_general_machine_intelligence_strategies/
Thanks, it is always good to have input from people who work in a given field. So please correct me if I’m wrong, but I’m under the impression that:
1) neural networks cannot in general detect connected components unless the network has some form of recursion.
2) No one knows how to make a neural network with recursion learn in any effective, marginally predictable fashion.
This is the sort of thing I was thinking of. Am I wrong about 1 or 2?
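To make (1) concrete, here is the sort of toy case I have in mind (a 1-D simplification of the 2-D connectedness the theorems actually address): a sequential scan that carries a little state decides “do the 1s form a single run?” trivially, and that carried state is exactly the kind of recursion a plain feedforward net lacks.

```python
def is_connected(bits):
    # Sequential scan with carried state: connected iff the 1s form
    # exactly one contiguous run. The two state bits are the "recursion"
    # a non-recursive feedforward net does not have.
    seen_run, run_ended = False, False
    for b in bits:
        if b == 1:
            if run_ended:
                return False   # a second run of 1s has started
            seen_run = True
        elif seen_run:
            run_ended = True
    return seen_run

print(is_connected([0, 1, 1, 1, 0]))  # True: a single run
print(is_connected([1, 0, 1, 0, 0]))  # False: two separate runs
```

The loop handles any input length with the same two bits of state, whereas a non-recursive net has to make the call from one fixed pass over the inputs, which is roughly the flavor of the limits I have in mind.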
Not sure what you mean by 1), but certainly, recurrent neural nets are more powerful. 2) is no longer true; see for example the GeneRec algorithm. It does something much like backpropagation, but since no derivatives are explicitly calculated, there’s no concern with recurrent loops.
On the whole, neural net research has slowed dramatically because of the common view you’ve expressed; but progress continues apace, and such nets are not far behind cutting-edge vision and speech processing algorithms, while working much more like the brain does.
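For flavor, here is a rough sketch of the basic GeneRec rule as O’Reilly describes it: a minus phase with only the input clamped, a plus phase with the target also clamped, and weight updates computed from local activation differences rather than derivatives. The layer sizes, learning rate, and settling schedule below are arbitrary illustration choices, not a faithful implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy net: 3 inputs -> 4 hidden -> 2 outputs, with symmetric
# output-to-hidden feedback (GeneRec assumes this recurrence).
W_xh = rng.normal(0.0, 0.5, (3, 4))   # input -> hidden
W_hy = rng.normal(0.0, 0.5, (4, 2))   # hidden -> output; feedback uses W_hy.T
lr = 0.2

def settle(x, clamp=None, steps=20):
    # Iterate until activations settle; optionally clamp the output layer.
    h, y = np.zeros(4), (np.zeros(2) if clamp is None else clamp)
    for _ in range(steps):
        h = sigmoid(x @ W_xh + y @ W_hy.T)   # hidden sees input plus feedback
        if clamp is None:
            y = sigmoid(h @ W_hy)
    return h, y

def generec_step(x, target):
    global W_xh, W_hy
    h_minus, y_minus = settle(x)           # minus phase: the net's expectation
    h_plus, _ = settle(x, clamp=target)    # plus phase: outcome clamped on
    # GeneRec updates: sender activity times the receiver's phase difference,
    # so nothing resembling an explicit derivative is ever computed.
    W_hy += lr * np.outer(h_minus, target - y_minus)
    W_xh += lr * np.outer(x, h_plus - h_minus)

x, target = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0])
for _ in range(200):
    generec_step(x, target)
print(settle(x)[1])  # output should have drifted toward the target
```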
Thanks. GeneRec sounds very interesting. Will take a look. Regarding 1, I was thinking of something like the theorems in chapter 9 of Perceptrons, which show that there are strong limits on the topological features of an input that a non-recursive neural net can recognize.