So if humanity had had no biological neural networks to steal the general idea and as proof of feasibility, would machine learning & AI be far behind where they are now?
NNs are popular now for their deep learning properties and ability to learn features from unlabeled data (like edge detection).
Comparing NNs to SVMs isn’t really fair. You use the tool best suited for the job. If you have lots of labeled data you are more likely to use an SVM. It just depends on what problem you are being asked to solve. And of course you might feed an NN’s output into an SVM or vice versa.
As for major achievements—NNs are leading for now because 1) most of the world’s data is unlabeled and 2) automated feature discovery (deep learning) is better than paying people to craft features.
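To make the "learning features from unlabeled data" point concrete, here is a minimal sketch (mine, not from the thread) of a tiny tied-weight autoencoder in plain NumPy. It trains on random patches purely so the example is self-contained; on real image patches the learned weight columns tend to resemble edge detectors, which is the kind of unsupervised feature discovery being described.

```python
import numpy as np

# Minimal tied-weight autoencoder: learns features with no labels at all.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))        # 500 unlabeled 8x8 "patches"
n_hidden = 16
W = rng.standard_normal((64, n_hidden)) * 0.1
b = np.zeros(n_hidden)                    # encoder bias
c = np.zeros(64)                          # decoder bias
lr = 0.01

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):
    H = sigmoid(X @ W + b)                # encode
    X_hat = H @ W.T + c                   # decode (tied weights)
    err = X_hat - X                       # reconstruction error
    # Gradients of 0.5*||X_hat - X||^2 through both uses of W.
    dH = (err @ W) * H * (1 - H)
    dW = X.T @ dH + err.T @ H
    W -= lr * dW / len(X)
    b -= lr * dH.mean(axis=0)
    c -= lr * err.mean(axis=0)

# The 16 columns of W are features discovered without a single label.
print(W.shape)  # (64, 16)
```

The same idea scales up in deep learning: stack such layers and each one learns progressively higher-level features from raw, unlabeled input.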
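The "feed an NN’s output into an SVM" hybrid can be sketched with scikit-learn (assuming it is installed; the dataset and attribute names such as `coefs_` are standard sklearn API, but the pipeline itself is my illustration, not something from the thread): train a small MLP, reuse its first hidden layer as a learned feature map, then classify those features with an SVM.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# 1) Train a small NN; its hidden layer becomes a learned feature map.
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
nn.fit(X_train, y_train)

def hidden_features(X):
    # First-layer activations (ReLU is MLPClassifier's default).
    return np.maximum(0, X @ nn.coefs_[0] + nn.intercepts_[0])

# 2) Feed the NN's learned features into an SVM.
svm = SVC(kernel="rbf").fit(hidden_features(X_train), y_train)
acc = svm.score(hidden_features(X_test), y_test)
print(f"hybrid NN->SVM accuracy: {acc:.3f}")
```

Going the other way (SVM scores as extra inputs to an NN) works too; which direction helps depends entirely on the problem.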
NNs’ connection to biology is very thin. Artificial neurons don’t look or act like regular neurons at all. But as a coined term to sell your research idea it’s great.
I am well aware of that. Nevertheless, as a historical fact, they were inspired by real neurons, they do operate more like real neurons than do, say, SVMs or random forests, and this is the background to my original question.
If you have lots of labeled data you are more likely to use an SVM.
ImageNet is a lot of labeled data, to give one example.
As for major achievements—NNs are leading for now because …
There is a difference between explaining and explaining away. You seem to think you are doing the latter, while you’re really just doing the former.
SVM training is O(n^3) in the number of examples - if you have lots of data you shouldn’t use SVMs.
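A back-of-the-envelope sketch of why that matters: a kernel SVM works against an n x n Gram matrix, and classical QP solvers on it cost roughly O(n^3) time, so doubling the data roughly octuples the work. The numbers below are crude estimates for illustration, not measurements of any particular solver.

```python
import numpy as np

def rbf_gram(X, gamma=0.1):
    # Pairwise squared distances turned into an RBF kernel matrix (n x n).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

# A kernel SVM trains against this full n x n matrix.
X = np.random.default_rng(0).standard_normal((200, 5))
K = rbf_gram(X)
print(K.shape)  # (200, 200)

# Memory and (rough) time cost at realistic dataset sizes.
for n in (1_000, 10_000, 100_000):
    gram_gb = n * n * 8 / 1e9       # float64 Gram matrix, in GB
    flops = float(n) ** 3           # crude O(n^3) solve estimate
    print(f"n={n:>7}: Gram ~{gram_gb:.3f} GB, ~{flops:.0e} flops")
```

At ImageNet scale the Gram matrix alone is far beyond memory, which is why large labeled datasets push people toward SGD-trained NNs (or linear/approximate-kernel SVMs) rather than exact kernel SVMs.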