In my view this is one of the most serious misconceptions about the entire field of machine learning. Sure, if you zoom out far enough, everything is a number or vector or matrix, but it’s rare that such a representation is the most convenient or precise one for formulating the learning problem.
A fun little article on training a neural network to recognize images:
https://medium.com/@ageitgey/machine-learning-is-fun-part-3-deep-learning-and-convolutional-neural-networks-f40359318721#.k34ceav8z
And an integration paper too:
"Towards an integration of deep learning and neuroscience"
http://biorxiv.org/content/early/2016/06/13/058545
Yeah, it shows how a reductionist viewpoint tends to dead-end. But it's so hard to represent stuff in analog...