Andrew M. Saxe, James L. McClelland, and Surya Ganguli, A mathematical theory of semantic development in deep neural networks, PNAS, vol. 116, no. 23, June 4, 2019, pp. 11537–11546, https://www.pnas.org/content/116/23/11537
Abstract: An extensive body of empirical research has revealed remarkable regularities in the acquisition, organization, deployment, and neural representation of human semantic knowledge, thereby raising a fundamental conceptual question: What are the theoretical principles governing the ability of neural networks to acquire, organize, and deploy abstract knowledge by integrating across many individual experiences? We address this question by mathematically analyzing the nonlinear dynamics of learning in deep linear networks. We find exact solutions to this learning dynamics that yield a conceptual explanation for the prevalence of many disparate phenomena in semantic cognition, including the hierarchical differentiation of concepts through rapid developmental transitions, the ubiquity of semantic illusions between such transitions, the emergence of item typicality and category coherence as factors controlling the speed of semantic processing, changing patterns of inductive projection over development, and the conservation of semantic similarity in neural representations across species. Thus, surprisingly, our simple neural model qualitatively recapitulates many diverse regularities underlying semantic development, while providing analytic insight into how the statistical structure of an environment can interact with nonlinear deep-learning dynamics to give rise to these regularities.
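To make the "exact solutions" mentioned in the abstract a bit more concrete, here is a minimal numerical sketch (mine, not the authors' code) of the central deep-linear-network result: with whitened inputs and a small, balanced initialization, each singular mode of the input-output correlation matrix is learned along its own sigmoidal trajectory, with stronger modes learned earlier, which is what drives the stage-like developmental transitions described above. The matrix sizes, mode strengths, learning rate, and initialization scale below are arbitrary illustrative choices.

```python
# Minimal sketch (not the paper's code): gradient descent in a two-layer
# linear network y_hat = W2 @ W1 @ x, trained on the expected squared error
# for whitened inputs, compared against the analytic sigmoidal trajectory
# a(t) = s / (1 + (s/a0 - 1) * exp(-2*s*lr*t)) for each input-output mode.
# All sizes, the mode strengths s_true, lr, and a0 are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n_out, n_in = 6, 5
lr, a0, steps = 0.002, 1e-3, 6000

# Input-output correlation matrix with known singular modes, standing in
# for the statistics of a structured semantic environment.
s_true = np.array([3.0, 1.5, 0.5])          # mode strengths, strongest first
U, _ = np.linalg.qr(rng.standard_normal((n_out, len(s_true))))
V, _ = np.linalg.qr(rng.standard_normal((n_in, len(s_true))))
Sigma_yx = U @ np.diag(s_true) @ V.T

# Small, balanced initialization aligned with the modes, so every mode starts
# at strength a0 and the modes evolve independently (the regime of the theory).
W1 = np.sqrt(a0) * V.T    # hidden layer: one unit per mode in this toy setup
W2 = np.sqrt(a0) * U

sim = np.zeros((steps, len(s_true)))
for t in range(steps):
    E = Sigma_yx - W2 @ W1                 # error in the overall input-output map
    gW1, gW2 = W2.T @ E, E @ W1.T          # gradients of the expected squared error
    W1, W2 = W1 + lr * gW1, W2 + lr * gW2
    sim[t] = np.diag(U.T @ (W2 @ W1) @ V)  # learned strength of each mode

# Analytic solution: one sigmoid per mode; strong modes are learned much
# earlier than weak ones, giving rapid, stage-like transitions.
tgrid = np.arange(1, steps + 1)[:, None]
theory = s_true / (1 + (s_true / a0 - 1) * np.exp(-2 * s_true * lr * tgrid))

print("max |simulated - analytic| mode strength:", np.abs(sim - theory).max())
```

For small learning rates the simulated and analytic trajectories should agree closely; widening the gap between the mode strengths makes the stage-like transitions, and the plateaus between them, more pronounced.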
BTW, the Annual Review of Condensed Matter Physics has an article on Statistical Mechanics of Deep Learning, by some people from Google Brain and Stanford. I believe the Annual Reviews are now all open access, so you might want to look around. The Annual Review of Linguistics might have some stuff for you.
Thanks very much for these comments and pointers. I’ll look at them closely and point some others at them too.
You’re welcome.