As far as I understand it, the “grandmother cell” hypothesis is mostly dead. At least artificial neural networks tend to favor representing concepts as highly distributed patterns. So “grandma” would activate one neuron that represents “old” and another that represents “woman”. And often they don’t even form human-interpretable patterns like that.
Here are some videos of Geoffrey Hinton explaining the idea of distributed representations:
http://d396qusza40orc.cloudfront.net/neuralnets/recoded_videos%2Flec4a%20%5B199f7e86%5D%20.mp4
http://d396qusza40orc.cloudfront.net/neuralnets/recoded_videos%2Flec4b%20%5Bb6788b94%5D%20.mp4
A great example of this concept is word2vec, which learns distributed representations of words. You can take the vector it learns for each word and do cool stuff with it: “king” − “man” + “woman” returns a vector very close to the representation of “queen”.
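Here’s a minimal sketch of that vector arithmetic, assuming tiny hand-made 4-dimensional vectors instead of real word2vec embeddings (which are learned from text and have hundreds of dimensions); the dimension labels are purely illustrative:

```python
import numpy as np

# Hypothetical embeddings; dimensions loosely encode (royalty, maleness, femaleness, age).
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.5]),
    "queen": np.array([0.9, 0.1, 0.8, 0.5]),
    "man":   np.array([0.1, 0.8, 0.1, 0.4]),
    "woman": np.array([0.1, 0.1, 0.8, 0.4]),
}

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# "king" - "man" + "woman" lands closest to "queen".
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max(vectors, key=lambda w: cosine(vectors[w], target))
print(best)  # queen
```

The subtraction cancels out the “maleness” component of “king” and the addition puts “femaleness” in its place, so the result only makes sense because each dimension carries a piece of meaning shared across many words.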
And by representing concepts in fewer, shared dimensions, you can generalize much better: if you know that old people tend to have bad hearing, you can predict that grandma might have bad hearing too, even if you’ve never observed grandma’s hearing directly.
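A toy sketch of why that works, assuming made-up two-dimensional feature vectors (old, female) and a simple least-squares predictor, none of which comes from the original post:

```python
import numpy as np

# Hypothetical feature vectors: (old, female).
people = {
    "grandpa":   np.array([1.0, 0.0]),
    "young_man": np.array([0.0, 0.0]),
    "grandma":   np.array([1.0, 1.0]),  # hearing never observed
}

# Observed hearing difficulty only for grandpa and young_man (1 = bad hearing).
X = np.stack([people["grandpa"], people["young_man"]])
y = np.array([1.0, 0.0])

# Fit a linear predictor over the features (with a bias column).
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# The weight learned for the shared "old" feature transfers to grandma.
pred = np.append(people["grandma"], 1.0) @ w
print(pred)  # close to 1: grandma is predicted to have bad hearing
```

Because “grandma” is built out of the same features as the people we did observe, whatever the model learned about “old” applies to her for free; with one dedicated grandmother cell per concept, nothing learned about grandpa would carry over.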