This simple spreading activation model also runs up against modern neuroscience research, which mostly contradicts the idea of a “grandmother cell”, i.e., a single neuron that represents a single concept like your grandmother.
...
Association between one idea and another is not through physical contiguity, but through similarities in the pattern. “Grandmother” probably has most of the same neurons in the same state as “grandfather”, and so it takes only a tiny stimulus to push the net from one attractor state to the other.
...
This is extremely unlikely. Associations can be made between concepts long after the patterns for those concepts have been learned. For a different explanation, see my 2000 article, A neuronal basis for the fan effect. It used the idea of convergence zones, promoted by Antonio Damasio (Damasio, A. R. (1990), Synchronous activation in multiple cortical regions: A mechanism for recall. The Neurosciences 2:287–296). My paper did this:
Have binary-neuron network 1 represent one concept by a collection of activated nodes
Have network 2 (or the same network in the next timestep) represent another concept the same way
Have a third network (the convergence zone) learn associations between patterns in those two networks, using the Amari/Hopfield algorithm.
Then the settling of the neurons in the convergence zone into a low-energy state causes the presence of one pattern in network 1 to recall an associated pattern in network 2, with dynamics and error rates that closely mimic John Anderson’s experiments on the quantitative measurement of spreading activation in humans.
(I was careful in my article to give credit to Amari, who invented the Hopfield network 10 years before Hopfield did. But I see now the editor “fixed” my reference to no longer give Amari priority.)
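For readers who want something concrete, here is a minimal sketch of the general mechanism described above: Hopfield/Amari-style attractor dynamics used to associate a binary pattern in one network with a binary pattern in another. This is not the model from the paper; for brevity the two concept patterns are simply concatenated into one state vector, the weight matrix stands in for the convergence zone, and all sizes and names are invented for illustration.

```python
# Minimal sketch (not the paper's model): Hopfield/Amari-style dynamics that
# recall a pattern in "network 2" given an associated pattern in "network 1".
import numpy as np

rng = np.random.default_rng(0)
N1, N2 = 20, 20          # sizes of "network 1" and "network 2" (made up)
N = N1 + N2

def random_pattern(n):
    # A random +1/-1 binary pattern; real cortical patterns would be sparse,
    # but dense patterns keep the sketch short.
    return rng.choice([-1, 1], size=n)

# A few associated concept pairs, e.g. (grandmother -> elderly woman), etc.
pairs = [(random_pattern(N1), random_pattern(N2)) for _ in range(3)]

# Hebbian outer-product learning (the classic Amari/Hopfield rule) on the
# concatenated patterns; W holds the learned associations.
W = np.zeros((N, N))
for a, b in pairs:
    x = np.concatenate([a, b])
    W += np.outer(x, x)
np.fill_diagonal(W, 0.0)   # no self-connections

def recall(cue, steps=20):
    # Clamp network 1 to the cue, start network 2 at random, and let the free
    # units settle into a low-energy state via asynchronous updates.
    x = np.concatenate([cue, random_pattern(N2)])
    for _ in range(steps):
        for i in rng.permutation(np.arange(N1, N)):  # update only network-2 units
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x[N1:]

cue, target = pairs[0]
retrieved = recall(cue)
print("fraction of network-2 units recalled correctly:",
      np.mean(retrieved == target))
```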
Thank you.
I know very little about connectionist networks beyond what I have read in a few review articles. I wrote this not because I was the best person to write it, but because no one else had written anything on them yet; I only stumbled across a description of them while looking for other things, which upset me, because I would have loved to have learned about them several years earlier. I would love it if you, or someone else who is an expert in the field, wrote something more up-to-date and accurate.
As far as I understand it, the “grandmother cell” hypothesis is mostly dead. At least in artificial neural networks, concepts tend to be represented as highly distributed patterns. So “grandma” might activate a neuron that represents “old” and another that represents “woman”. And often the networks don’t even form human-interpretable patterns like that.
Here are some videos of Geoffrey Hinton explaining the idea of distributed representations:
http://d396qusza40orc.cloudfront.net/neuralnets/recoded_videos%2Flec4a%20%5B199f7e86%5D%20.mp4
http://d396qusza40orc.cloudfront.net/neuralnets/recoded_videos%2Flec4b%20%5Bb6788b94%5D%20.mp4
A great example of this concept is word2vec, which learns distributed representations of words. You can take the vector it learns for each word and do cool things with it: for example, “king” - “man” + “woman” returns a vector very close to the representation for “queen”.
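One way to try this yourself is with the gensim library and its downloadable pretrained Google News word2vec vectors (a minimal sketch; the particular model name here is just one choice, and the exact nearest neighbours and scores depend on which vectors you load):

```python
import gensim.downloader as api

# Load pretrained 300-dimensional word2vec vectors trained on Google News
# (a large download the first time; any pretrained word vectors would do).
vectors = api.load("word2vec-google-news-300")

# "king" - "man" + "woman": the nearest remaining vector is "queen".
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# e.g. [('queen', 0.71...)]
```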
And by representing concepts in fewer, shared dimensions, you can generalize much better: if you know that old people have bad hearing, you can predict that grandma might have bad hearing.
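A toy sketch of that kind of generalization, with entirely made-up features and data: a classifier that learns “old → bad hearing” from other people can apply it to grandma even though she never appeared in the training data.

```python
from sklearn.linear_model import LogisticRegression

# Invented feature vectors: [is_old, is_woman, is_tall]; label 1 = bad hearing.
X = [
    [1, 0, 1],   # old tall man      -> bad hearing
    [1, 1, 0],   # old short woman   -> bad hearing
    [0, 1, 1],   # young tall woman  -> good hearing
    [0, 0, 0],   # young short man   -> good hearing
]
y = [1, 1, 0, 0]

clf = LogisticRegression().fit(X, y)

# Grandma (old, woman, tall) never appeared in the training data, but she shares
# the "old" feature with people who did, so the model generalizes to her.
grandma = [[1, 1, 1]]
print(clf.predict(grandma))   # -> [1]: predicted to have bad hearing
```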