My reaction to those simple neural-net accounts of cognition is similar, in that I wanted very much to overcome their (pretty glaring) limitations. I wasn’t so much concerned with their inability to handle Turing-complete domains as with other, more practical issues. But I came to a different conclusion about the value of probabilistic programming approaches, because that seems to force the real world to conform to the idealized world of a branch of mathematics, and, like Leonardo, I don’t like telling Nature what she should be doing with her designs. ;-)
Under the heading of ‘interesting history’ it might be worth mentioning that I hit my first frustration with neural nets at the very time the field was bursting into full bloom—I was part of the revolution that shook cognitive science in the mid-to-late 1980s. Even while it was in full swing, I was already going beyond it. And I have continued on that path ever since. Tragically, the bulk of NN researchers stayed loyal to the very simplistic systems invented in the first blush of that spring, and never seemed to really understand that they had boxed themselves into a dead end.
Could you explain the kinds of neural networks that go beyond the standard feedforward, convolutional, and recurrent supervised networks? In particular, I’d really appreciate hearing a connectionist’s view on how unsupervised neural networks can learn to convert low-level sensory features into the kind of more abstracted, “objectified” (in the sense of “made objective”) features that can be used for the bottom, most concrete layer of causal modelling.
Ah, but Nature’s elegant design for an embodied creature is precisely a bounded-Bayesian reasoner! You just minimize the free energy of the environment.
Yikes! No. :-)
That paper couldn’t be a more perfect example of what I meant when I said:
“…that seems to force the real world to conform to the idealized world of a branch of mathematics.”
In other words, the paper talks about a theoretical entity which is a descriptive model (not a functional model) of one aspect of human decision-making behavior. That means you cannot jump to the conclusion that this is “Nature’s design for an embodied creature”.
About your second question: I can only give you an overview, but the essential ingredient is that, to go beyond the standard neural nets, you need to consider neuron-like objects that are actually free to be created and destroyed like processes on a network, and which interact with one another using more elaborate, generalized versions of the rules that govern simple nets.
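To give a flavour of the structural difference, here is a deliberately toy sketch in Python. Everything in it (the class names, the clipped weighted-sum rule, the reaping threshold) is my invention for purposes of illustration, not a description of the real system. The only point it makes is that the units are runtime objects, spawned and destroyed like processes, rather than cells in a fixed weight matrix.

```python
import random

class Atom:
    """A neuron-like unit that can be created and destroyed at runtime."""
    def __init__(self, pattern):
        self.pattern = pattern          # the (sub)pattern this atom stands for
        self.activation = random.random()
        self.links = []                 # list of (other_atom, weight) pairs

    def update(self):
        # A generalized local interaction rule. Here it is just a clipped
        # weighted sum, but the point is that each atom could in principle
        # carry a much richer rule than the fixed one in a simple net.
        if self.links:
            total = sum(w * other.activation for other, w in self.links)
            self.activation = max(0.0, min(1.0, total))

class AtomPool:
    """Atoms behave like processes on a network: spawned and reaped freely."""
    def __init__(self):
        self.atoms = []

    def spawn(self, pattern, neighbors=()):
        # Creation is an ordinary runtime event, not a design-time decision.
        atom = Atom(pattern)
        for other in neighbors:
            atom.links.append((other, random.uniform(-1.0, 1.0)))
        self.atoms.append(atom)
        return atom

    def step(self, threshold=0.05):
        for atom in self.atoms:
            atom.update()
        # Destruction is equally ordinary: atoms that fall quiet are reaped.
        self.atoms = [a for a in self.atoms if a.activation >= threshold]
```

Something would call pool.spawn(...) whenever a new candidate pattern becomes worth representing, with pool.step() running continuously in the background, so the population of units itself is part of the system’s state.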
From there it is easy to get to unsupervised concept building because the spontaneous activity of these atoms (my preferred term) involves searching for minimum-energy* configurations that describe the world.
* There is actually more than one type of ‘energy’ being simultaneously minimized in the systems I work on.
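In the same toy spirit, here is what ‘searching for minimum-energy configurations’ can look like computationally. The two energy terms and their weighting below are again pure inventions for the sake of the example; the shape of the computation is the point: a stochastic relaxation that lowers several energies at once, with no supervision signal anywhere.

```python
import math
import random

def fit_energy(config, data):
    # How badly the current configuration describes the observed data
    # (lower = better). Here: mean squared mismatch.
    return sum((c - d) ** 2 for c, d in zip(config, data)) / len(data)

def complexity_energy(config):
    # A second, simultaneously minimized 'energy': a crude simplicity
    # pressure that penalizes many strongly active elements.
    return sum(abs(c) for c in config) / len(config)

def relax(data, steps=10000, temperature=0.05):
    """Stochastic relaxation toward a low-energy description of the data."""
    config = [random.random() for _ in data]

    def energy(c):
        # Both energies are lowered at once, via a weighted combination.
        return fit_energy(c, data) + 0.1 * complexity_energy(c)

    current = energy(config)
    for _ in range(steps):
        i = random.randrange(len(config))
        proposal = list(config)
        proposal[i] += random.gauss(0.0, 0.1)
        new = energy(proposal)
        # Metropolis-style acceptance: always go downhill, occasionally
        # uphill, so the search does not freeze in a poor local minimum.
        if new < current or random.random() < math.exp((current - new) / temperature):
            config, current = proposal, new
    return config, current
```

Because nothing in the data is labelled, whatever structure the relaxation settles into is discovered rather than taught, which is the sense in which the concept building can be unsupervised.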
You can read a few more hints of this stuff in my 2010 paper with Trevor Harley (which is actually on a different topic, but I threw in a sketch of the cognitive system for purposes of illustrating my point in that paper).
Reference:
Loosemore, R.P.W. & Harley, T.A. (2010). Brains and Minds: On the Usefulness of Localisation Data to Cognitive Psychology. In M. Bunzl & S.J. Hanson (Eds.), Foundational Issues of Neuroimaging. Cambridge, MA: MIT Press. http://richardloosemore.com/docs/2010a_BrainImaging_rpwl_tah.pdf