Hmm, well I should say that my impression is that there’s a frustrating lack of consensus on practically everything in systems neuroscience, but “the brain doesn’t do backpropagation” seems about as close to consensus as anything gets. This Yoshua Bengio paper has a quick summary of the reasons:
The following difficulties can be raised regarding the biological plausibility of back-propagation:
(1) the back-propagation computation (coming down from the output layer to lower hidden layers) is purely linear, whereas biological neurons interleave linear and non-linear operations,
(2) if the feedback paths known to exist in the brain (with their own synapses and maybe their own neurons) were used to propagate credit assignment by backprop, they would need precise knowledge of the derivatives of the non-linearities at the operating point used in the corresponding feedforward computation on the feedforward path,
(3) similarly, these feedback paths would have to use exact symmetric weights (with the same connectivity, transposed) of the feedforward connections,
(4) real neurons communicate by (possibly stochastic) binary values (spikes), not by clean continuous values,
(5) the computation would have to be precisely clocked to alternate between feedforward and back-propagation phases (since the latter needs the former’s results), and
(6) it is not clear where the output targets would come from.
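To make (2) and (3) concrete, here is a minimal NumPy sketch (my own, not from the paper) of the backward pass for a two-layer network; all the variable names are illustrative. The line computing delta1 is exactly where a biological feedback path would need the transposed feedforward weights and the derivative of the nonlinearity at the feedforward operating point.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10)            # input vector
W1 = rng.standard_normal((20, 10))     # feedforward weights, layer 1
W2 = rng.standard_normal((5, 20))      # feedforward weights, layer 2
y_target = rng.standard_normal(5)      # point (6): unclear where such targets come from in the brain

def f(z):
    return np.tanh(z)                  # nonlinearity

def df(z):
    return 1.0 - np.tanh(z) ** 2       # its exact derivative

# Feedforward pass
z1 = W1 @ x
h1 = f(z1)
y = W2 @ h1                            # linear readout; squared-error loss assumed

# Backward pass (credit assignment)
delta2 = y - y_target                  # output error
# Point (3): the feedback path needs W2.T, the exact transpose of the feedforward
# weights ("weight transport"). Point (2): it also needs df(z1), the derivative of
# the nonlinearity at the operating point used on the feedforward pass. Point (1):
# apart from that elementwise rescaling, the backward computation is purely linear
# in the error signal.
delta1 = (W2.T @ delta2) * df(z1)

grad_W2 = np.outer(delta2, h1)         # the weight updates each synapse would have to implement
grad_W1 = np.outer(delta1, x)
```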
(UPDATE 1 YEAR LATER: after reading more Randall O’Reilly, I am now pretty convinced that error-driven learning is one aspect of neocortex learning, and I’m open-minded to the possibility that the errors can propagate up at least one or maybe two layers of hierarchy. Beyond that, I dunno, but brain hierarchies don’t go too much deeper than that anyway, I think.)
Then you ask the obvious follow-up question: “if not backprop, then what?” Well, this is unknown and controversial; the Yoshua Bengio paper above offers its own answer, which I am disinclined to believe (but want to think about more). Of course, there is more than one right answer; indeed, my general attitude is that if someone tells me a biologically plausible learning mechanism, it’s probably in use somewhere in the brain, even if it’s only playing a very minor and obscure role in regulating heart rhythms or whatever, just because that’s the way evolution tends to work.
But anyway, I expect that the lion’s share of learning in the neocortex comes from just a few mechanisms. My favorite example is probably high-order sequence memory learning. There’s a really good story for that:
At the lowest level—biochemistry—we have “Why Neurons Have Thousands of Synapses”, a specific and biologically plausible mechanism for the creation and deactivation of synapses.
At the middle level—algorithms—we have papers like this and this and this, where Dileep George takes pretty much that exact algorithm (which he calls a “cloned hidden Markov model”), abstracted away from the biological implementation details, and shows that it displays all sorts of nice behavior in practice.
At the highest level—behavior—we have observable human behaviors, like the fact that we can hear a snippet of a song and immediately know how that snippet continues, yet still have trouble remembering the song title. And no matter how well we know a song, we cannot easily sing the notes in reverse order. Both of these are exactly what you’d expect from the properties of this sequence memory algorithm (see the toy sketch below).
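Here is a toy sketch of that last point, using nothing beyond the Python standard library; it is my own cartoon of the “cloned state” idea, not Numenta’s or Dileep George’s actual code. Each token gets a separate clone for each context it has been seen in, transitions are stored between clones, and recall works by replaying a snippet to find the surviving clone and then rolling forward. There is no backward table, which is (very roughly) why continuing a melody is easy and reversing it is not.

```python
from collections import defaultdict

class ToySequenceMemory:
    """A cartoon of high-order ("cloned") sequence memory. Illustrative only."""

    def __init__(self):
        self.transitions = defaultdict(set)   # clone -> set of possible next clones
        self.clones = defaultdict(set)        # token -> set of clones representing it
        self._clone_after = {}                # (previous clone, token) -> clone
        self._n = 0

    def _clone_for(self, prev_clone, token):
        # Reuse a clone if this token has been seen in this exact context before,
        # otherwise allocate a fresh one. A clone is just (token, unique id).
        key = (prev_clone, token)
        if key not in self._clone_after:
            self._n += 1
            self._clone_after[key] = (token, self._n)
            self.clones[token].add(self._clone_after[key])
        return self._clone_after[key]

    def learn(self, sequence):
        prev = None
        for token in sequence:
            clone = self._clone_for(prev, token)
            if prev is not None:
                self.transitions[prev].add(clone)
            prev = clone

    def continue_from(self, snippet, steps=5):
        # Start from every clone of the first token, then keep only the clones
        # consistent with the rest of the snippet (the snippet disambiguates context).
        candidates = set(self.clones[snippet[0]])
        for token in snippet[1:]:
            candidates = {nxt for c in candidates
                          for nxt in self.transitions.get(c, ())
                          if nxt in self.clones[token]}
        clone = next(iter(candidates)) if candidates else None
        out = []
        for _ in range(steps):
            if clone is None or clone not in self.transitions:
                break
            clone = next(iter(self.transitions[clone]))   # forward lookup only
            out.append(clone[0])
        return out

mem = ToySequenceMemory()
mem.learn("ABCDEFG")
mem.learn("XYCDQRS")
print(mem.continue_from("BCD"))   # ['E', 'F', 'G'] -- the context before "CD" picks the song
print(mem.continue_from("YCD"))   # ['Q', 'R', 'S']
# There is no reverse table: asking "what came before G?" would need a different
# data structure entirely, which is the singing-backwards point above.
```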
This sequence memory thing obviously isn’t the whole story of what the neocortex does, but it fits together so well, I feel like it has to be one of the ingredients. :-)