So if humanity had had no biological neural networks to steal the general idea from and to serve as proof of feasibility, would machine learning & AI be far behind where they are now?
I don’t think we would be that far behind.
NNs had lost favor in the AI community after 1969 (Minsky and Papert's Perceptrons) and have only become popular again in the last decade. See http://en.wikipedia.org/wiki/Artificial_neural_network
The only crossover that comes to mind for me is vision deep learning 'discovering' edge detection. There is also some interest in sparse NN activation.
Yes, I'm familiar with the history. But how far behind would we be without the neural network work done since ~2001? The non-neural-network competitors on ImageNet like SVMs are nowhere near human levels of performance; Watson required neural networks; Stanley won the DARPA Grand Challenge without neural networks because it had so many sensors, but real self-driving cars will have to use neural networks; neural networks are why Google Translate has gone from roughly Babelfish levels (hysterically bad) to remarkably good; voice recognition has gone from mostly hypothetical to routine on smartphones...
What major AI achievements have SVMs or random forests racked up over the past decade comparable to any of that?
NNs' connection to biology is very thin. Artificial neurons don't look or act like real neurons at all. But as a coined term to sell your research idea, it's great.
NNs are popular now for their deep learning properties and ability to learn features from unlabeled data (like edge detection).
Comparing NNs to SVMs isn't really fair. You use the best tool for the job. If you have lots of labeled data you are more likely to use an SVM. It just depends on what problem you are being asked to solve. And of course you might feed an NN's output into an SVM or vice versa.
As for major achievements: NNs are leading for now because 1) most of the world's data is unlabeled and 2) automated feature discovery (deep learning) is better than paying people to craft features.
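To make the two claims above concrete (learning features without labels, then feeding a neural network's output into an SVM), here is a minimal sketch assuming scikit-learn is available; the dataset, layer size, and hyperparameters are illustrative guesses, not anything from the thread:

```python
# Sketch: unsupervised feature learning (a small RBM, one of the neural-network
# building blocks of the deep-learning era being discussed) followed by an SVM
# trained on the learned features.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X = X / 16.0  # BernoulliRBM expects inputs scaled to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    # Unsupervised step: the RBM never sees the labels; it only learns features.
    ("rbm", BernoulliRBM(n_components=100, learning_rate=0.06,
                         n_iter=20, random_state=0)),
    # Supervised step: an SVM trained on the RBM's learned features.
    ("svm", SVC(kernel="rbf", gamma="scale")),
])
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

On a toy dataset like this the RBM step may buy little over raw pixels, but it shows the division of labor being described: the feature learner never touches the labels, and the SVM only sees the learned features.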
I am well aware of that. Nevertheless, as a historical fact, they were inspired by real neurons, they do operate more like real neurons than do, say, SVMs or random forests, and this is the background to my original question.
If you have lots of labeled data you are more likely to use an SVM.

ImageNet is a lot of labeled data, to give one example.
As for major achievements: NNs are leading for now because …

There is a difference between explaining, and explaining away. You seem to think you are doing the latter, while you're really just doing the former.
SVMs are O(n^3) in the number of training examples; if you have lots of data you shouldn't use SVMs.
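For what it's worth, a quick way to see that scaling claim in practice; a rough sketch assuming scikit-learn's kernel SVC, where the exact exponent depends on the data, the kernel, and the underlying libsvm implementation, and in practice tends to land between quadratic and cubic:

```python
# Rough timing of kernel-SVM training as the number of samples grows.
# Illustrative only: constants and the exact exponent depend on the data
# and hyperparameter settings.
import time
from sklearn.datasets import make_classification
from sklearn.svm import SVC

for n in (1000, 2000, 4000, 8000):
    X, y = make_classification(n_samples=n, n_features=50, random_state=0)
    start = time.perf_counter()
    SVC(kernel="rbf", gamma="scale").fit(X, y)
    print(f"n={n:5d}  fit time: {time.perf_counter() - start:.2f}s")
```

If the cubic claim held exactly, doubling n would multiply the fit time by roughly 8x; even if the observed growth is gentler, it is steep enough to make kernel SVMs painful at ImageNet scale, which is the commenter's point.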
What year do you put the change in Google Translate? It didn't switch to neural nets until 2012, right? Did anyone notice the change? My memory is that it was dramatically better than Babelfish in 2007, let alone 2010.
Good question… I know that Google Translate began as a pretty bad outsourced translator (SYSTRAN) because I had a lot of trouble figuring out when Translate first came out for my Google survival analysis, and it began being upgraded and expanded almost constantly from ~2002 onwards. The 2007 switch was supposedly from the company SYSTRAN to an internal system, but what does that mean? SYSTRAN is a proprietary company which could be using anything it wants internally, and admits it’s a hybrid system. The 2006 beta just calls it statistics and machine learning, with no details about what this means. Google Scholar’s no help here either—hits are swamped by research papers mentioning Translate, and a few more recent hits about the neural networks used in various recent Google mobile-oriented services like speech or image recognition.
So… I have no idea. Highly unlikely to predate their internal translator in 2006, anyway, but could be your 2012 date.
Here is a 2007 paper that I found when I was writing the above. I don’t remember how I found it, or why I think it representative, though.