In this post and the previous one you linked to, you do a good job explaining why your criterion e is possible / not ruled out by the data. But can you explain more about what makes you think it’s true?
Maybe the reason for (e) would be more clear if you replace “hypothesis” with “possible course of action”. Then (e) is the thing that makes us more likely to eat when we’re hungry, etc.
(“Course of action” is just a special case of what I call “hypothesis”. “Hypothesis” is synonymous with “One possible set of top-down predictions”.)
I don’t think I’m departing from “Surfing Uncertainty” etc. in any big way in that previous post, but I felt that the predictive coding folks don’t adequately discuss how the specific hypotheses / predictions are actually calculated in the brain. I might have been channeling the Jeff Hawkins 2004 book a bit to fill in some gaps, but it’s mainly my take.
I guess I should contextualize something in my previous post: I think anyone who advocates predictive coding is obligated to discuss The Wishful Thinking Problem. It’s not something specific to my little (a-e) diagram. So here is The Wishful Thinking Problem, stripped away from the rest of what I wrote:
Wishful thinking problem: If we’re hungry, we have a high-level prior that we’re going to eat. Well, that prior privileges predictions that we’ll go to a restaurant, which is sensible… but that prior also privileges predictions that food will magically appear in our mouths, which is wishful thinking. We don’t actually believe the latter. So that’s The Wishful Thinking Problem.
The Wishful Thinking Problem is not a big problem!! It has an obvious solution: Our prior that “magic doesn’t happen” is stronger than our prior that “we’re going to eat”. Thus, we don’t expect food to magically appear in our mouth after all! Problem solved! That’s all I was saying in that part of the previous post. Sorry if I made it sound overly profound or complicated.
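To make that concrete, here’s a toy scoring sketch (entirely my own illustration, not from any actual predictive-coding implementation; all the prior strengths are invented): each hypothesis pays a penalty for every prior it violates, and the “magic doesn’t happen” prior simply outweighs the “we’re going to eat” prior.

```python
# Toy illustration: score competing hypotheses by the priors they violate.
# The prior strengths are made-up numbers, chosen only to show the ordering.

LOG_PRIOR_STRENGTH = {
    "we're going to eat": 2.0,       # active when hungry
    "magic doesn't happen": 10.0,    # much stronger, always active
}

def cost(violated_priors):
    """Total penalty: each violated prior costs its log-strength."""
    return sum(LOG_PRIOR_STRENGTH[p] for p in violated_priors)

go_to_restaurant = -cost([])                        #   0.0: violates nothing
sit_and_starve = -cost(["we're going to eat"])      #  -2.0: violates the hunger prior
food_appears = -cost(["magic doesn't happen"])      # -10.0: violates the magic prior

# The sensible plan wins; the wishful-thinking hypothesis loses badly.
assert go_to_restaurant > sit_and_starve > food_appears
```

So wishful thinking never gets off the ground: the hypothesis that satisfies the hunger prior by violating a much stronger prior scores worse than just staying hungry.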
I like Friston’s attempt to unify these by saying that bad mood is just a claim that you’re in an unpredictable environment
ETA: I think I have a better understanding of emotions now than I did when I wrote this comment; see Inner Alignment in the Brain
I encourage you to think about it more computationally! The amygdala has a circuit that takes data, does some calculation, and decides on that basis whether to emit a feeling of disgust. And it has another circuit that takes data, does some calculation, and decides whether to emit a feeling of sadness. And so on for boredom and fear and jealousy and every other emotion. Each of these circuits is specifically designed by evolution to emit its feeling in the appropriate circumstances.
So pretend that you’re Evolution, designing the sadness circuit. What are you trying to calculate? I think the short answer is:
Sadness circuit design goal: Emit a feeling of sadness when: My prospects are grim, and I have no idea how to make things better.
Why is this the goal, as opposed to something else? Because this is the appropriate situation to cry for help and rethink all your life plans.
OK, so if that’s the design goal, then how do you actually build a circuit in the amygdala to do that? Keep in mind that this circuit is not allowed to directly make reference to our understanding of the world, because “our understanding of the world” is an inscrutable pattern of neural activity in a massive, convoluted, learned data structure in the cortex, whereas the emotion circuits need to have specific, genetically-predetermined neuron wiring. So what can you do instead? Well, you can design the circuit so that it listens for signals indicating that the cortex is predicting rewarding things (the amygdala does have easy access to this information), and suppresses sadness as long as that signal occurs regularly. After all, that signal is typically a sign that we are imagining a bright future. This circuit won’t perfectly match the design goal, but it’s as close as Evolution can get.
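Here’s what that might look like in toy-Python form (my own sketch; the window size and threshold are arbitrary illustrative choices, not claims about actual amygdala anatomy):

```python
from collections import deque

class SadnessCircuit:
    """Toy model: emit sadness when reward-predicting signals from the
    cortex have been rare lately -- a proxy for 'prospects are grim and
    I see no way to improve them'."""

    def __init__(self, window=20, min_reward_signals=3):
        self.recent = deque(maxlen=window)       # rolling record of recent signals
        self.min_reward_signals = min_reward_signals

    def step(self, cortex_predicts_reward):
        """Receive one tick of input from the cortex; return whether to emit sadness."""
        self.recent.append(cortex_predicts_reward)
        window_full = len(self.recent) == self.recent.maxlen
        return window_full and sum(self.recent) < self.min_reward_signals

# Bleak input: the cortex never predicts reward, so sadness fires.
bleak = SadnessCircuit()
for _ in range(20):
    sad_when_bleak = bleak.step(False)

# Hopeful input: regular reward predictions keep sadness suppressed.
hopeful = SadnessCircuit()
for _ in range(20):
    sad_when_hopeful = hopeful.step(True)
```

Note the circuit never inspects *what* the cortex is imagining, only how often reward-predicting signals arrive, which is exactly the kind of information it plausibly has cheap access to.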
(By contrast, the algorithm “check whether you’re in an unpredictable environment” doesn’t seem to fit, to me. Reading a confusing book is frustrating, not saddening. Getting locked in jail for life is saddening but offers predictability.)
So anyway, my speculation here is that:
(1) a lot of the input data for the amygdala’s various emotion calculation circuits comes from the cortex (duh),
(2) the neural mechanism controlling the strength of predictions also controls the strength of signals from the cortex to the amygdala (I think this is a natural consequence of the predictive coding framework, although to be clear, I’m speculating),
(3) a global reduction in the strength of signals going from the cortex to the amygdala affects pretty much all of the emotion circuits, and it turns out that the result is sadness and other negative feelings (this is pure speculation on my part, although it seems to fit the sadness algorithm example above). I don’t think there’s any particularly deep reason that globally weaker signals from the cortex to the amygdala create sadness rather than happiness. I think it just falls out of the details of how the various emotion circuits are implemented and interact.
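Points (2) and (3) together amount to something like the following sketch (the circuit names, signal strengths, and shared threshold are all illustrative): a single global gain scales every cortex-to-amygdala signal, so turning that one knob down quiets all the emotion circuits at once.

```python
# Toy model of speculations (2)+(3): one global gain multiplies every
# cortex->amygdala signal; each emotion circuit thresholds its own input.
# All numbers are invented for illustration.

def emotions(cortex_signals, gain=1.0, threshold=0.5):
    """Return which emotion circuits fire, given cortical input strengths."""
    return {name: strength * gain > threshold
            for name, strength in cortex_signals.items()}

signals = {"disgust": 0.9, "fear": 0.7, "reward-prediction": 0.8}

healthy = emotions(signals, gain=1.0)    # all circuits clear their thresholds
depressed = emotions(signals, gain=0.4)  # global weakening: none of them fire

assert all(healthy.values())
assert not any(depressed.values())
```

The point of the sketch is just that nothing circuit-specific needs to go wrong: a single global parameter silences every downstream circuit, including the reward-prediction signal that (per the design goal above) normally keeps sadness suppressed.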
(The claim “depression involves global weakening of signals going from cortex to amygdala” seems like it would be pretty easy to test, if I had a psych lab. First try to elicit disgust in a way that bypasses the cortex, like smelling something gross. Then try to elicit disgust in a way that requires data to pass from the cortex to the amygdala, like remembering or imagining something gross. [Seeing something gross can be in either category, I think.] My prediction is that in the case that doesn’t involve cortex, you’ll get the same disgust reaction for depressed vs control; and in the case that does involve cortex, depressed people will have a weaker disgust reaction, proportional to the severity of the depression.)
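Written out as a toy model (the linear scaling by severity is an arbitrary simplification of my own), the predicted pattern is:

```python
# Predicted pattern for the hypothetical experiment above: depression
# severity scales down the cortical route but not the subcortical one.
# "severity" is in [0, 1]; baseline disgust response is normalized to 1.0.

def disgust_response(route, severity):
    if route == "bypasses_cortex":   # e.g. smelling something gross
        return 1.0                   # unaffected by depression, on this model
    if route == "via_cortex":        # e.g. remembering/imagining something gross
        return 1.0 - severity        # weaker in proportion to severity
    raise ValueError(f"unknown route: {route}")

# Same smell-based response regardless of depression severity...
assert disgust_response("bypasses_cortex", 0.8) == disgust_response("bypasses_cortex", 0.0)
# ...but the imagination-based response degrades with severity.
assert disgust_response("via_cortex", 0.8) < disgust_response("via_cortex", 0.2)
```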
the best fits (washed-out visual field and psychomotor retardation) are really marginal symptoms of depression that you only find in a few of the worst cases
I guess that counts against this blog post, but I don’t think it quite falsifies it. Instead I can claim that motor control works normally as long as the cortical control signals are above some threshold. So the signals can get somewhat weaker without creating a noticeable effect, but if they get severely weaker, they run up against the threshold and the effect starts to show. (The motor control signals do, after all, get further processed by the cerebellum etc.; they’re not literally controlling muscles themselves.) Ditto for the washed-out visual field: the appearance of a thing you’re staring at is normally a super-strong prediction, so maybe it can get somewhat weaker without creating a noticeable effect. Whereas maybe the amygdala is more sensitive to relatively small changes in signal levels, for whatever reason. (This paragraph might be special pleading, I’m not sure.)
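The threshold claim, as a toy function (the threshold value and the linear falloff below it are made up; the point is only the shape of the curve):

```python
# Toy version of the threshold claim: behavior is unaffected while the
# cortical control signal stays above what the downstream machinery
# (cerebellum etc.) needs, and only degrades once it falls below that.

def motor_performance(signal_strength, threshold=0.3):
    """Normalized performance in [0, 1] for a given control-signal strength."""
    if signal_strength >= threshold:
        return 1.0                         # mild weakening: no visible effect
    return signal_strength / threshold     # severe weakening: effect shows

# Mild weakening is invisible; only severe weakening produces a symptom.
assert motor_performance(1.0) == motor_performance(0.4) == 1.0
assert motor_performance(0.1) < 1.0
```

On this picture, psychomotor retardation and a washed-out visual field sit on the flat part of the curve for most patients, which would explain why they only show up in the worst cases.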
There are two perspectives. One is “Let’s ignore the worst of the worst cases, their brains might be off-kilter in all kinds of ways!” The other is “Let’s especially look at the worst of the worst cases, because instead of trying to squint at subtle changes of brain function, the effects will be turned up to 11! They’ll be blindingly obvious!”
I’m not sure what direction all of this happens in.
I think it’s gotta be a vicious cycle, otherwise it wouldn’t persist, right? OK how about this: “Globally weaker predictions cause sadness, and sadness causes globally weaker predictions”.
I already talked about the first part. But why might sadness cause globally weaker predictions? Well, one evolutionary goal of sadness is to make us less attached to our current long-term plans, since those plans apparently aren’t working out for us! (Remember the sadness circuit design goal I wrote above.) Globally weaker predictions would do that, right? As you weaken the prediction, “active plans” turn into “possible plans”, then into “vague musings”...
Anyway, maybe that vicious cycle dynamic is always present to some extent, but other processes push in other directions and keep our emotions stable. …Until a biochemical insult—or an unusually prolonged bout of “normal” sadness (e.g. from grief)—tips the system out of balance, and we get sucked into that vortex of mutually reinforcing “sadness + globally weak predictions”.
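Here’s the vicious-cycle-plus-tipping-point story as a toy dynamical system (every parameter is invented; it’s only meant to show that “small shocks decay, big shocks lock in” is a coherent dynamic):

```python
# Toy bistable dynamic: sadness globally weakens predictions, and weak
# predictions feed sadness -- but the feedback only dominates past a
# tipping point. Below it, homeostatic recovery wins. All parameters
# are arbitrary illustrative choices.

def simulate(shock, steps=200):
    """Return the final sadness level (in [0, 1]) after an initial shock."""
    sadness = shock
    for _ in range(steps):
        pred_strength = 1.0 - sadness        # sadness globally weakens predictions
        # Weak predictions feed back into sadness only past the tipping point:
        feedback = 0.5 * (1.0 - pred_strength) if pred_strength < 0.6 else 0.0
        recovery = 0.2 * sadness             # homeostatic pull back to baseline
        sadness = min(1.0, max(0.0, sadness + feedback - recovery))
    return sadness

assert simulate(0.2) < 0.05   # ordinary sadness (e.g. a bad day) fades out
assert simulate(0.5) > 0.9    # a big enough shock locks in the low state
```

A biochemical insult or prolonged grief would correspond to anything that pushes the system past the tipping point, whether by raising the shock itself or (equivalently in this toy model) lowering the threshold.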
Thanks for the comment!