Haven’t read it, but my default guess based on approximately-zero exposure would be that it’s one of those “theories” which basically says “hey let’s stick lots of simple units together and run simulations on them”, while answering basically-zero questions which I actually care about. (For instance: does it tell me how internal structures or activation-patterns in a mental system will map to specific structures in the external environment? Or how to detect the formation of general-purpose search?) Or, a lower bar: does it make any universal quantitative predictions about neural systems at all? (No, “bigger system do better” does not count as quantitative. If it successfully predicted e.g. the parameters of the scaling curves of model nets 20+ years before those nets were built, then I’d definitely pay attention.) I’d be surprised if there’s any real model of cognition there at all, as opposed to somebody just vibing that neural-network-style stuff is vaguely good somehow.
So to answer your question: because I have those expectations, I haven’t looked into the topic. If my expectations are wrong, then maybe that’s a mistake on my part.
I don’t have an adequate answer for this, since these models are incomplete. But the way I see it is that these people (Hinton, Rumelhart, McClelland, Smolensky) had a certain way of mathematically reasoning about cognition, and that reasoning produced most of the breakthroughs we see in AI today (backprop, multi-layer models, etc.). It seems that trying to build on that model of cognition could yield new insights into the questions you’re asking, attack the problem from a different angle, or help create a grounded paradigm for alignment research to build on.