This processing is controlled by a few specialized nuclei in the hypothalamus and has relatively simple, slow control-loop dynamics that apply inverse control whenever a prediction error falls outside the ideal range.
Maybe I’m picking a fight over Active Inference and it’s going to lead into a pointless waste-of-time rabbit hole that I will immediately come to regret … but I really want to say that “a prediction error” is not involved in this specific thing.
For example, take the leptin—NPY/AgRP feedback connection.
As you probably know, when there are very few fat cells, they emit very little leptin into the bloodstream, and that lack of leptin increases the activity of the NPY/AgRP neurons in the arcuate nucleus (directly, via the leptin receptors on those neurons). Those NPY/AgRP neurons then send various signals around the brain to make the animal want to eat, feel hungry, conserve energy, etc., which over time increases the number of fat cells (on the margin). Feedback control!
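Here’s a minimal toy simulation of that loop, just to pin it down (the constants and functional forms below are invented purely for illustration; nothing is calibrated to real leptin physiology):

```python
# Toy model of the leptin--NPY/AgRP loop described above. All numbers are
# made up; the point is structural, not quantitative. Note that the code
# contains no setpoint variable, no predicted value, and no comparator
# subtracting one from the other -- just a loop of monotone influences.

fat = 2.0  # fat reserves, arbitrary units

for step in range(1000):
    leptin = 0.5 * fat                  # fat cells emit leptin in proportion to fat mass
    npy_agrp = max(0.0, 4.0 - leptin)   # leptin inhibits NPY/AgRP firing
    eating = 0.3 * npy_agrp             # NPY/AgRP activity drives food intake
    fat += 0.1 * (eating - 0.6)         # intake above basal expenditure adds fat

print(round(fat, 3))  # settles at 4.0 -- an equilibrium implicit in the
                      # constants, not represented anywhere as a setpoint
```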
But I don’t see any prediction happening in this story. It’s a feedback loop, and it’s a control system, but where’s the prediction, and where is the setpoint, and where is the comparator subtracting them? I don’t think they’re present. So I don’t think there are any prediction errors involved here.
(I do think there are tons of bona fide prediction errors happening in other parts of the brain, like cortex, striatum, amygdala, and cerebellum.)
See my post here.
I also think this is mostly a semantic issue. The same process can be described in terms of implicit prediction errors: e.g., there is some baseline level of leptin in the bloodstream that the NPY/AgRP neurons in the arcuate nucleus ‘expect’; if there is less leptin than that, this generates an implicit ‘prediction error’ in those neurons, which causes them to increase firing, which in turn stimulates various food-consuming reflexes and desires, which ultimately leads to more food and hence ‘corrects’ the prediction error. It isn’t necessary that there be explicit ‘prediction error neurons’ anywhere encoding these errors, although for larger systems it is often helpful to modularize things that way.
Ultimately, though, I think it is more a conceptual question of how best to think about control systems: in terms of implicit prediction errors, or just in terms of the feedback-loop dynamics? Either way, it amounts to the same thing.
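To pin down that “it amounts to the same thing,” here’s a toy sketch (again with invented numbers, reusing the toy response function from the earlier snippet): the same leptin-to-firing map written once as direct feedback and once via an implicit ‘prediction error’. It’s literally just a change of variables.

```python
# Two descriptions of the same toy NPY/AgRP response to leptin.
# BASELINE is an invented number: the leptin level the neurons implicitly
# 'expect' under the prediction-error description.

BASELINE = 4.0

def firing_direct(leptin: float) -> float:
    # Description A: leptin directly inhibits firing; no error anywhere.
    return max(0.0, 4.0 - leptin)

def firing_via_error(leptin: float) -> float:
    # Description B: first form an implicit 'prediction error'
    # (expected minus actual leptin), then fire in proportion to it.
    error = BASELINE - leptin
    return max(0.0, error)

# The two maps agree everywhere, so plugging either into the feedback loop
# gives identical dynamics -- the 'prediction error' is a relabeling.
for leptin in [0.0, 1.5, 4.0, 6.0]:
    assert firing_direct(leptin) == firing_via_error(leptin)
print("identical input-output behavior")
```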