I’ve come across a lot of discussion recently about self-coercion, self-judgment, procrastination, shoulds, etc. Having just read this post, I think it is unusually good at offering a general framework applicable to many of these issues (i.e., that of the “moral brain” taking over). It’s also peppered with a lot of nice insights, such as why feeling guilty about procrastination is in fact moral licensing that enables procrastination.
While there are many parts of the post that I quibble with (such as the idea of the “moral brain” as an invariant specialized module), it is a great standalone introduction and explanation of a framework that I think is useful and important.
I’m curious what the objection to the “moral brain” term is. As used in this article, it’s mainly shorthand for a complex interaction of social learning, biases, specialized emotions, and prospect theory’s notion of a baseline expectation of what one “ought” to have or be able to get in a specific circumstance or in exchange for a specific cost. (Or conversely what some specific thing “ought” to cost.)
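Concretely, the prospect-theory piece is just the usual reference-dependent value function, with the reference point $r$ playing the role of what one feels one “ought” to get. (This is only a sketch to make the idea concrete; the parameter values are the standard empirical estimates, not anything calibrated for moral expectations.)

$$
v(x) \;=\;
\begin{cases}
(x - r)^{\alpha}, & x \ge r,\\
-\lambda\,(r - x)^{\beta}, & x < r,
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\quad \lambda \approx 2.25.
$$

Since $\lambda > 1$, falling short of that baseline is felt roughly twice as strongly as an equivalent gain above it, which is part of why a violated “ought” packs such a punch.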
This statement for example:
> Motivating you to punish things is what that part of your brain does, after all; it’s not like it can go get another job!
I’m coming more from a predictive processing / bootstrap learning / constructed emotion paradigm, in which your brain is very flexible about building high-level modules like moral judgment and punishment. The complex “moral brain” that you described is not etched into our hardware, and it’s not universal; it’s learned. This means it can work quite differently or be absent in some people, and in others it can be deconstructed or redirected — “getting another job”, as you’d say.
I agree that in practice lamenting the existence of your moral brain is a lot less useful than dissolving self-judgment case-by-case. But I got a sense from your description that you see it as universal and immutable, not as something we learned from parents/peers and can unlearn.
P.S. Personal bias alert — I would guess that my own moral brain is perhaps in the 5th percentile of judginess and desire to punish transgressors. I recently told a woman about EA and she was outraged about young people taking it on themselves to save lives in Africa when billionaires and corporations exist who aren’t helping. It was a clear demonstration of how different people’s moral brains are.
> Personal bias alert — I would guess that my own moral brain is perhaps in the 5th percentile of judginess and desire to punish transgressors
Note that this is not evidence in favor of being able to unlearn judginess, unless you’re claiming you were previously at the opposite end of the spectrum and then unlearned it somehow. If so, then I would love to know what you did, because it would be 100% awesome: I could do with being a lot less judgy myself, and would love a way to not have to pick off judgmental beliefs one at a time.
If you have something better than such one-off alterations, and it can be taught and used by persons other than yourself, in a practical timeframe, then such a thing would be commercially quite valuable.
I am aware of many self-help approaches for eliminating specific judgments. However, apart from long-term meditation, or a sudden enlightenment/brain tumor/stroke, I am not aware of any methods for globally “unlearning” the capacity for judginess. If you know how to do such a thing, please publish! You will be revolutionizing the field.
> I got a sense from your description that you see it as universal and immutable, not as something we learned from parents/peers and can unlearn.
Define “it”. ;-)
> the complex “moral brain” that you described
I think perhaps we’re talking past each other here, since I don’t see a “complex” moral brain, only several very simple things working together, in a possibly complex way. (Many of these things are also components shared by other functions, such as our purity-contamination system, or the “expected return calculation” system described by prospect theory and observed in various human and animal experiments.)
For example, we have emotions that bias us towards punishing things, but we can certainly learn when to feel that way. You can learn not to punish things, but this won’t remove the hardware support for the ability to feel that emotion. Both you and the woman you mentioned are capable of feeling outrage, even though you’ve learned different things to be outraged about. That animals raised in captivity and pre-verbal human children can both be observed expressing outrage over perceived unfair treatment or reduced rewards, without first needing an example to learn from, is highly suggestive here as well.
I think it’s safe to say that these low-level elements—such as the existence of emotions like moral outrage and moral disgust—are sufficiently universal as to imply hardware backing, despite the fact that the specific things that induce those emotions are culturally learned. AFAIK, they have universal facial expressions, found in even the most remote of tribes, which is strong evidence for hardware support for these emotions. (There are also established inbuilt biases for various types of moral learning, such as associations to purity, contamination, etc.; see, e.g., the writings of Haidt on this.)
Can you learn to route around these emotions, or prevent them from arising in the first place, to the point that it might seem you’re “unlearning” them? Well, I imagine that if you meditated long enough, you might be able to, as some people who meditate a lot become pretty nonjudgmental. But I don’t think that’s “unlearning” judgmental emotions so much as creating pathways to inhibit one’s response to the emotion. The meditator still notices the emotion arising, but then refrains from responding to it.
That people can meditate for years and still not achieve such a state also seems to me like strong evidence for judgmental emotions as being the function of a piece of hardware that can’t just be turned off, only starved of stimulation or routed around. The literature around meditation likewise suggests that people have been trying for thousands of years to turn off attachment and judgment, with only limited success. If it were purely a software problem, I rather expect humanity would have figured something out by now.
I know this is from a few years ago, but I am not sure that I have much of this “moral brain”. I often tell people that we don’t “live in the land of should”; we must accept what is and move on from there. Perhaps my inability to generate strong emotions has something to do with it? Maybe the moral brain is something that is built up during childhood while participating in various group dynamics, both inside and outside of the family. I don’t recall ever feeling like I was truly part of any of the groups that I participated in. I have always felt like an outsider. I could understand the people that I was with, but I was never one of them.
These days I don’t recall ever thinking that someone “should” be doing or have done something. They either did or didn’t do it, and I either approve or don’t approve of those actions. I do use their past actions to inform my opinion of their likely future behavior. But I can’t change who they are. I can only change how I interact with them.
I’m not certain if I get stuck in judgment loops about myself. I am aware of a few traits that I don’t care for about myself, and that I use avoidance behaviors to distract myself from them. But I’m not sure if it’s the same thing. I don’t constantly punish myself when I am doing the avoidance behavior. When I have a problem that I don’t know how to solve, I generally put it aside for a while (usually a few days), then reexamine it later to determine if anything has changed.
FYI, I did experience a lot of stress/worry due to procrastination in middle school. I resolved afterwards that I would no longer worry; worry was simply a waste of my time. Instead, I would either do a thing or not, but I would abstain from worrying. I think I essentially built a short-circuit into my brain that intercepted any worry-related thoughts. Unfortunately, I have ADHD, and without stress/worry, executive dysfunction can be difficult to overcome. Or perhaps that is me attempting to rationalize away non-productive activity at work.