How many components go into “neural nets”?
At the very least, there are networks of artificial neurons. You seem to accept Ilya’s dismissal of the artificial neuron as too simple to credit, but take the networks as the biologically inspired part. My view of those components is exactly the opposite.
Networks of simple components come up everywhere. There were circuits of electrical components a century ago. A parsed computer program is a network of simple components. Many people doing genetic programming (inspired by biology, but not neurology) work with such trees or networks. Selfridge’s Pandemonium (1958) advocated features built of features, but I think it was inspired by introspective psychology, not neuroscience.
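To make “everywhere” concrete, here is a minimal sketch of a parsed expression as a network of simple components (the helper and the toy tree are mine, not taken from any genetic programming library):

```python
import operator

def evaluate(node, env):
    """Recursively evaluate a parsed expression tree."""
    if isinstance(node, str):            # variable leaf
        return env[node]
    if isinstance(node, (int, float)):   # constant leaf
        return node
    op, *children = node                 # internal node: (op, child, ...)
    return op(*(evaluate(c, env) for c in children))

# (x * y) + 3: a network of simple components, nothing "neural" about it
tree = (operator.add, (operator.mul, "x", "y"), 3)
print(evaluate(tree, {"x": 2, "y": 5}))  # -> 13
```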
The common artificial neuron, by contrast, seems crazy to me. It doesn’t matter how simple it is if it is unmotivated. What seems crazy is the biologically inspired idea of a discrete output. Why have a threshold or probabilistic firing in the middle of the network? Of course, you want something like that at the very end of a discrimination task, so maybe you’d think of recycling it into the middle, but not me. I have heard it described as a kind of regularization, so maybe people would have come up with it just by thinking about regularization. Or maybe it could be replaced with other regularizations. And a lot of methods have been adapted to real-valued outputs, so maybe the discrete outputs didn’t matter.
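To pin down what I mean by a discrete output, here is a minimal sketch (the weights and inputs are made up) of one weighted sum sent through a hard threshold, a probabilistic firing rule, and a plain real-valued nonlinearity. My complaint is with the first two appearing in the middle of a network:

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = np.array([0.8, -0.3]), 0.1     # made-up weights and bias
x = np.array([1.0, 2.0])              # made-up input
z = w @ x + b                         # the weighted sum everyone agrees on

hard = float(z > 0)                   # threshold unit: discrete output
p = 1.0 / (1.0 + np.exp(-z))          # firing probability
stochastic = float(rng.random() < p)  # probabilistic firing: also discrete
smooth = np.tanh(z)                   # real-valued output, no discreteness

print(hard, stochastic, smooth)
```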
So that’s the “neural” part and the “network” part, but there are a lot more algorithms that go into recent work. For example, Boltzmann machines are named as if they come from physics, but supposedly they were invented by a neuroscientist because they can be trained in a local way that is biologically realistic. (Except I think it’s only RBMs that have that property, so the neuroscientist failed in the short term, or the story is complete nonsense.) Markov random fields did come out of physics, and maybe they could have led to everything else.
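The “local” claim is easiest to see in code. Here is a minimal sketch of one step of contrastive divergence (CD-1) for an RBM, with made-up sizes and learning rate, and biases omitted. The point is that each weight’s update involves only the two units it connects:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 6, 4, 0.1               # made-up sizes and rate
W = rng.normal(0.0, 0.01, (n_visible, n_hidden))  # visible-to-hidden weights
v0 = rng.integers(0, 2, n_visible).astype(float)  # one fake binary data vector

# Positive phase: hidden probabilities given the data.
h0 = sigmoid(v0 @ W)
# Negative phase: one Gibbs step back to a reconstruction.
h0_sample = (rng.random(n_hidden) < h0).astype(float)
v1 = sigmoid(W @ h0_sample)
h1 = sigmoid(v1 @ W)

# Locality: the update to W[i, j] depends only on the states of units i and j.
W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
```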