Van Gelder represents computationalism this way:

According to [the computational] approach, when I return a serve in tennis, what happens is roughly as follows. Light from the approaching ball strikes my retina and my brain’s visual mechanisms quickly compute what is being seen (a ball) and its direction and rate of approach. This information is fed to a planning system which holds representations of my current goals (win the game, return the serve, etc.) and other background knowledge (court conditions, weaknesses of the other player, etc.). The planning system then infers what I must do: hit the ball deep into my opponent’s backhand. This command is issued to the motor system. My arms and legs move as required.
In its most familiar and strikingly successful applications, the computational approach makes a series of further assumptions. Representations are static structures of discrete symbols. Cognitive operations are transformations from one static symbol structure to the next. These transformations are discrete, effectively instantaneous, and sequential. The mental computer is broken down into a number of modules responsible for different symbol-processing tasks. A module takes symbolic representations as inputs and computes symbolic representations as outputs.
This is indeed a popular formulation of the computational theory of mind originally defended by Putnam and Fodor, but I’m not sure I’ve seen a major Less Wrong author endorse it, incorrect details and all. For example, my post Neuroscience of Human Motivation disagrees with the above description on several points.
I’m not sure the implementation details are particularly relevant to his main argument, though. The central concern is that computation is step-wise whereas dynamicism is continuous in time. A computational approach, by definition, will break a task into a sequence of steps, and these steps have an order but not an inherent time-scale. (It’s hard to see how an approach would be computationalist at all if this were not the case.) This has consequences for typical LessWrong theses. For example, speeding up the substrate for a computation has an obvious result: each step will be executed faster. If we have a computation consisting of three steps, S1 → S2 → S3, each taking 10 ms, and we speed it up by a factor of 10, we’ll have a computation that executes in 3 ms instead of 30 ms. But if we have a dynamical equation describing the system, this isn’t the case. I can speak of the system moving between states, say S(t) → S(t+1), but if we speed up the components involved by 10x (say, these are neural states, and we’re speeding up the neurons), I don’t get the same thing but faster; I get something else entirely. Perhaps the result would be greater sensitivity to shorter time-scales, but given that the brain is likely temporally organised, I’m inclined to think what I’d get would be a brain that doesn’t work at all.
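The contrast can be made concrete with a toy simulation (my own illustration, not Van Gelder’s or the commenter’s): a step-wise computation simply finishes sooner when each step runs faster, whereas a leaky integrator dx/dt = (−x + sin t)/τ, driven by an input whose time-scale is fixed by the environment, does something *different*, not just faster, when its time constant τ is shrunk.

```python
import math

def run_discrete(steps, step_time, speedup=1.0):
    """A step-wise computation: speeding up the substrate shrinks the
    wall-clock time of every step equally; the sequence of states is
    unchanged. Returns total execution time."""
    return steps * step_time / speedup

def run_leaky_integrator(tau, t_end=40.0, dt=0.001):
    """Euler-integrate dx/dt = (-x + sin(t)) / tau: a leaky integrator
    driven by an external signal whose time-scale is fixed. Returns the
    response amplitude after the initial transient has died away."""
    x, t, peak = 0.0, 0.0, 0.0
    while t < t_end:
        x += dt * (-x + math.sin(t)) / tau
        t += dt
        if t > t_end / 2:   # ignore the initial transient
            peak = max(peak, abs(x))
    return peak

# Discrete computation: 3 steps of 10 ms, sped up 10x -> 3 ms total.
print(run_discrete(3, 10.0))              # 30.0
print(run_discrete(3, 10.0, speedup=10))  # 3.0

# Dynamical system: "speeding up the neurons" (tau: 10 -> 1) does not
# replay the same trajectory faster, because the input's time-scale is
# unchanged. The response changes shape entirely.
print(run_leaky_integrator(10.0))  # heavily attenuated, ~0.10
print(run_leaky_integrator(1.0))   # tracks the input, ~0.71
```

With τ = 10 the unit attenuates the input (steady-state amplitude 1/√(1+τ²) ≈ 0.10); with τ = 1 it tracks it (≈ 0.71). Making the component 10x faster changed *what* the system does, not merely how fast it does it, which is the asymmetry the speedup argument turns on.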
See, this is why I make LW discussion posts asking where I need to say “oops.” :)
I encountered dynamical systems when I read my first cogsci textbook, and was probably too influenced by its take on dynamical systems. Here’s what Bermudez writes on pages 429-430:
We started out… with the idea that the dynamical systems approach might be a radical alternative to some of the basic assumptions of cognitive science — and in particular to the idea that cognition essentially involves computation and information processing. Some proponents of the dynamical systems approach have certainly made some very strong claims in this direction. Van Gelder, for example, has suggested that the dynamical systems model will in time completely supplant computational models, so that traditional cognitive science will end up looking as quaint (and as fundamentally misconceived) as the computational governor.
[But] as we have seen throughout this book, cognitive science is both interdisciplinary and multi-level… This applies to the dynamical systems hypothesis no less than to anything else. There is no more chance of gaining a complete picture of the mind through dynamical systems theory than there is of gaining a complete account through neurobiology, say, or AI....
...Dynamical systems models are perfectly compatible with information-processing models of cognition. Dynamical systems models operate at a higher level of abstraction. They allow cognitive scientists to abstract away from details of information-processing mechanisms in order to study how systems evolve over time. But even when we have a model of how a cognitive system evolves over time we will still need an account of what makes it possible for the system to evolve in those ways.
Bermudez illustrates his point by saying that dynamical systems theory can do a good job of modeling traffic jams, but this doesn’t mean we no longer have to think about internal combustion engines, gasoline, etc.

What do you think?
I think it’s essentially begging the question. Van Gelder is questioning whether there is computation going on at all, so to say that dynamical systems abstract away from the details of the information-processing mechanisms is obviously to assume that computation is going on. That might be a way somebody already committed to computationalism could look to incorporate dynamical systems theory, but it’s not a response to Van Gelder. This is obvious from the traffic analogy. The dynamical account of traffic is obviously an abstraction from what actually happens (internal combustion engines, gasoline, etc.). But the analogy only holds with cognitive science if you assume what actually happens in cognitive systems to be computation. What Van Gelder is doing is criticising computationalism for not being able to properly account for things that are critical to cognition (such as evolution in time). It’s not clear to me what it could mean to abstract away from computational models in order to study how systems evolve over time if those models do not themselves say anything about how they evolve over time. I think Van Gelder addresses this: it’s difficult to get an algorithmic model to be time-sensitive.
That said, whether the dynamical approach alone is adequate to capture everything about cognition is another matter. There are alternative approaches that provide an adequate description of mechanisms but that are more sensitive to the issue of time. For example, see Anthony Chemero’s Radical Embodied Cognitive Science, where he argues that we need ecological psychology to make sense of the mechanisms behind the dynamics. Typically dynamicists operate from an embodied/ecological perspective and don’t simply claim that the equations are the whole explanation (they are concerned with, say, neurons, bodies, the environment, etc.).

I think Bermudez is also confused about levels here. Presumably the mechanism level for cognition is the brain and its neurons, and perhaps the body and parts of the environment, and a computational account is an abstraction from those mechanisms just as much as a dynamical equation is. It’s common in computationalism to conflate identifying the brain as a computer with merely claiming that a computational approach gives an adequate descriptive account of some process in the brain. So, for example, I could argue that an algorithm gives an adequate description of a given brain process because it is not time-sensitive and can therefore be described as a sequence of successive states without reference to its evolution in time. But that would not imply that the underlying mechanisms are computational, only that a computational description gives an adequate account.
But that would not imply that the underlying mechanisms are computational, only that a computational description gives an adequate account.
Could you elaborate on what you mean by this? Our most successful computational models of various cognitive systems at different levels of organization do remarkably well at predicting brain phenomena, to the point where we can simulate increasingly large cortical structures.
I read most of Van Gelder’s last article on dynamical cognitive systems (in BBS) before he switched to critical thinking and argument mapping research, and I’m still not seeing why computationalism and the dynamical systems approach are incompatible. For example, Van Gelder says that a distinguishing feature of dynamical systems is their quantitative approach to states—but of course computationalism is often quantitative about states, too. Trappenberg must be confused, too, since his textbook on computational neuroscience talks several times about dynamical systems and doesn’t seem to be aware that they are somehow at odds with his program of computationalism.
Naively, it looks to me like the dynamical systems approach was largely a reaction to early versions of the physical symbol system hypothesis and neural networks, but if you understand computationalism in the modern sense (which often includes models of time, quantitative state information, etc.) while still describing the system in terms of information processing, then there doesn’t seem to be much conflict between the two.
Even Chemero agrees:

On our view, dynamical and [computational] explanations of the same complex system get at different but related features of said system, described at different levels of abstraction and with different questions in mind. We see no a priori reason to claim that either kind of explanation is more fundamental than the other.