Unfortunately, I’m not an expert in this field, so I can’t tell you what the state of the academic discussion looks like now. I get the impression that a number of psychologists have at least partly bought into the BCP paradigm (known as Perceptual Control Theory) and have been pursuing research within it for decades, but it doesn’t seem to have swept the field.
At least on a superficial level, the model reminds me somewhat of the hierarchical prediction model, in that both postulate the brain to be composed of nested layers of controllers, each acting on the errors of the layer below it. (I put together a brief summary of the paper here, though it was mainly intended as notes for myself, so it’s not as clear as it could be.) Do you have a sense of how similar or different the models are?
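To make the ‘nested layers of controllers’ picture concrete, here is a minimal sketch of my own (not taken from either paper): two stacked proportional controllers, where the higher layer’s output sets the reference signal of the lower layer, and each layer acts only on its own error. The gains and the number of levels are arbitrary choices for illustration.

```python
# Illustrative sketch of a two-level control hierarchy in the
# PCT / hierarchical-prediction spirit. The higher layer controls
# toward a goal by setting the reference (setpoint) of the lower
# layer; each layer acts only on its own error signal.

def proportional_controller(gain):
    """Return a controller whose output is gain * (reference - perception)."""
    def control(reference, perception):
        return gain * (reference - perception)
    return control

high = proportional_controller(gain=0.5)  # outputs a reference for the lower layer
low = proportional_controller(gain=1.0)   # drives the controlled variable directly

position = 0.0  # the perceived/controlled variable
goal = 10.0     # top-level reference value

for _ in range(50):
    # The higher layer's output becomes the lower layer's reference signal...
    low_reference = position + high(goal, position)
    # ...and the lower layer acts on its own error against that reference.
    position += low(low_reference, position)

print(round(position, 2))  # converges toward the top-level goal of 10.0
```

The point of the toy example is only the wiring: no layer sees the whole problem, yet the system as a whole reduces the top-level error, which is the structural feature the two models seem to share.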
Thanks for the paper! It was an interesting read and seems very relevant (and now I’ve got some reference chains to follow).
Do you have a sense of how similar or different the models are?
My impression is that if they describe someone as a cyberneticist, then they’re operating on a model that’s similar enough. First three sentences of the paper:
“The whole function of the brain is summed up in: error correction.” So wrote W. Ross Ashby, the British psychiatrist and cyberneticist, some half a century ago. Computational neuroscience has come a very long way since then. There is now increasing reason to believe that Ashby’s (admittedly somewhat vague) statement is correct, and that it captures something crucial about the way that spending metabolic money to build complex brains pays dividends in the search for adaptive success.
From my read of the rest of the paper, the similarities go deep. Control theory is explicitly discussed in this section:
A closely related body of work in so-called optimal feedback control theory (e.g., Todorov 2009; Todorov & Jordan 2002) displays the motor control problem as mathematically equivalent to Bayesian inference. Very roughly – see Todorov (2009) for a detailed account – you treat the desired (goal) state as observed and perform Bayesian inference to find the actions that get you there. This mapping between perception and action emerges also in some recent work on planning (e.g., Toussaint 2009). The idea, closely related to these approaches to simple movement control, is that in planning we imagine a future goal state as actual, then use Bayesian inference to find the set of intermediate states (which can now themselves be whole actions) that get us there. There is thus emerging a fundamentally unified set of computational models which, as Toussaint (2009, p. 29) comments, “does not distinguish between the problems of sensor processing, motor control, or planning.” Toussaint’s bold claim is modified, however, by the important caveat (op. cit., p. 29) that we must, in practice, deploy approximations and representations that are specialized for different tasks. But at the very least, it now seems likely that perception and action are in some deep sense computational siblings and that:
The best ways of interpreting incoming information via perception, are deeply the same as the best ways of controlling outgoing information via motor action … so the notion that there are a few specifiable computational principles governing neural function seems plausible. (Eliasmith 2007, p. 380)
Action-oriented predictive processing goes further, however, in suggesting that motor intentions actively elicit, via their unfolding into detailed motor actions, the ongoing streams of sensory (especially proprioceptive) results that our brains predict. This deep unity between perception and action emerges most clearly in the context of so-called active inference, where the agent moves its sensors in ways that amount to actively seeking or generating the sensory consequences that they (or rather, their brains) expect (see Friston 2009; Friston et al. 2010). Perception, cognition, and action – if this unifying perspective proves correct – work closely together to minimize sensory prediction errors by selectively sampling, and actively sculpting, the stimulus array. They thus conspire to move a creature through time and space in ways that fulfil an ever-changing and deeply inter-animating set of (sub-personal) expectations. According to these accounts, then:
Perceptual learning and inference is necessary to induce prior expectations about how the sensorium unfolds. Action is engaged to resample the world to fulfil these expectations. This places perception and action in intimate relation and accounts for both with the same principle. (Friston et al. 2009, p. 12)
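The ‘planning as inference’ move described above (treat the goal state as observed, then infer the actions that would explain it) can be shown in a toy example. This is my own illustration of the idea the quote attributes to Todorov and Toussaint, not code from either paper; the actions and probabilities are made up.

```python
# Toy sketch of 'planning as inference': condition on the goal state
# as if it had been observed, then use Bayes' rule to infer which
# action best explains that observation.

actions = ["reach_left", "reach_right"]

# Uniform prior over actions (an arbitrary choice for the example).
prior = {"reach_left": 0.5, "reach_right": 0.5}

# P(goal achieved | action): how well each action 'explains' the goal.
likelihood = {"reach_left": 0.2, "reach_right": 0.9}

# Bayes: P(action | goal) is proportional to P(goal | action) * P(action).
unnormalised = {a: likelihood[a] * prior[a] for a in actions}
evidence = sum(unnormalised.values())
posterior = {a: unnormalised[a] / evidence for a in actions}

# The 'plan' is just the maximum-a-posteriori action.
best_action = max(posterior, key=posterior.get)
print(best_action, round(posterior[best_action], 3))  # reach_right 0.818
```

The same machinery that perception would use to infer causes from sensations is here run ‘in reverse’ to infer actions from a desired outcome, which is the sense in which the quote calls perception and action computational siblings.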
Basically, it looks like their view fits in with the hierarchical controls view and possibly adds burdensome details (in the sense that they believe the reference values take on a specific form that the hierarchical control theory view allows but does not require).