The Curse Of The Counterfactual
The Introduction
The Curse of the Counterfactual is a side-effect of the way our brains process is-ought distinctions. It causes our brains to compare our past, present, and future to various counterfactual imaginings, and then blame and punish ourselves for the difference between reality and whatever we just made up to replace it.
Seen from the outside, this process manifests as stress, anxiety, procrastination, perfectionism, creative blocks, loss of motivation, inability to let go of the past, constant starting and stopping on one goal or frequent switching between goals, low self-esteem, and many other things. From the inside, however, these counterfactuals can feel more real to us than reality itself, which can make it difficult even to notice it’s happening, let alone stop it.
Unfortunately, even though each specific instance of the curse can be defused using relatively simple techniques, we can’t just remove the parts of our brain that generate new instances of the problem. Which means that you can’t sidestep the Curse by imagining yet another counterfactual world: one in which you believe you ought to be able to avoid falling into its trap, just by being smarter or more virtuous!
Using examples derived from my client work, this article will show how the Curse operates, and the bare bones of some approaches to rectifying it, with links to further learning materials. (Case descriptions are anonymized and/or composites; i.e., the names are not real, and identifying details have been changed.)
The Disclaimer
To avoid confusion between object-level advice, and the meta-level issue of “how our moral judgment frames interfere with rational thinking”, I have intentionally omitted any description of how the fictionalized or composite clients actually solved the real-life problems implied by their stories. The examples in this article do not promote or recommend any specific object-level solutions for even those clients’ actual specific problems, let alone universal advice for people in similar situations.
So, if you have the impression that I am recommending, for example, specific ways to deal with career or relationship issues, you are extrapolating something that is not actually here: this article is strictly about how the interaction between counterfactuals and moral judgment interferes with our practical thinking processes, not about what conclusions people draw once their ability to think practically has been restored.
The Stories
The Wish
Carlos is telling me about his childhood. His father was very strict, imposing cruel and sadistic punishments for the most minor offences. Years ago, the punishments stopped, but Carlos is still upset. His father should not have done those things, he says. “He should have loved me more.”
“Is that true?” I ask. “If you compare the statement ‘He should have loved me more’, to what actually happened, what do you feel?”
Carlos is hesitant, confused.
I explain further. “Is the truth that he should have loved you more? Or is it only true that you wish he loved you more?”
Try these two statements on for size, I tell him. How do they feel? Which one is better? Which one is true?
My father should have loved me more
I wish my father had loved me more
The first feels angry. Resentful. He feels like a victim, helpless. There is nothing he can do.
The second one, when he tries it, feels different. It is sad, because what he wishes did not come to pass. At the same time, it is wistful, because he is experiencing a glimpse of what it would be like, if his father had loved him more. And then the feeling of sadness passes. There is grief, but then it’s over.
When he looks back on the past, it’s now just a memory. He still wishes that things were different, still feels wistful… but he’s no longer a victim, at least in that particular way.
The Principle
Sara is telling me about a professional conference she recently attended. As part of a group exercise, she tried hard to persuade her group to adopt her plan for their presentation, and was met with dismissal and obstruction.
She is angry at herself for not being better at persuading them. She should’ve been less stubborn, she says. Should have listened more, tried to understand their points of view beforehand, so she could be more persuasive later. If they understood what she had to offer, she thinks, they would have used her ideas.
I explain to Sara that she is suffering from the Curse of the Counterfactual: the brain’s tendency to attach moral weight to the things that we imagine could have gone differently, or how we believe things ought to be.
She is suffering from a more complex version of the Curse than Carlos, but the result is similar. She feels angry at herself, not her father. And she feels at fault for her perceived failings, because her brain is literally punishing her for what she failed to do, with guilt and self-directed anger.
“Compare your experience of the event with what you think should have happened. Is it true that you should have been less stubborn? Or do you only wish you had been less stubborn?”
Sara fights the question more than Carlos. Less experienced in emotional reflection, she retreats to a logical argument, saying that it’s definitely true she should have been less stubborn, because that would have produced better results.
Her brain, I explain, is in a loop. On the one hand, she knows the facts of what actually happened. She admits that she did not actually do any of the things she thinks she should have. But her brain persists in arguing that reality is wrong. Her brain is telling her it should not have happened the way it did, and that (in effect) the fact that it did happen that way must be punished.
I explain to Sara that our brain has special machinery devoted to punishing. It makes us feel anger or disgust when we perceive our standards – or our tribe’s standards – being violated. And it generally doesn’t stop, until or unless the violator is sufficiently punished, or repents.
But reality cannot be punished. And it certainly can’t ever repent! So every time she thinks of what did happen, her brain keeps on punishing her, telling her that it should not have happened the way that it did.
“Have you ever heard the phrase, ‘it’s the principle of the thing’?” I ask. “People go to ridiculous lengths when a principle is at stake, because our brains want to make it costly for others to cross us. The problem is, when we apply ‘principles’ to reality, the only person who gets punished is us.”
The Punishment Myth
Ingvar is having trouble getting his work done. He believes he should be able to knock it out in an afternoon, but he doesn’t. He surfs the internet, feeling guilty the entire time, because he should be working.
Ingvar is experienced at Focusing and IFS (Internal Family Systems), so he has better access to his felt sense than Sara or Carlos, and we rapidly dismantle the moral belief that he is a bad person whenever he is not working. Afterwards, we test his prior thought that he should be able to get his work done in a certain amount of time, and he spontaneously begins talking about strategies for getting it done more easily and quickly. He no longer feels stuck about doing the work, the way he did before.
I explain to him that this is because the brain has different machinery for different types of motivation. Our moral judgment system does not motivate us to actually accomplish anything. All it can do is motivate us to punish or protest, to rage and repent. “After all,” I say. “Punishment doesn’t actually change your behavior in any meaningful way. When you weren’t doing your work, you punished yourself constantly while surfing the internet. But you never actually stopped.”
“In fact, while you were punishing yourself, you got to feel good about yourself, because punishing meant you cared. You weren’t some bad person who wouldn’t even feel guilty about not working. So your self-punishment actually gave you a moral license to continue as you were.”
“Yeah,” Ingvar says. “You’re right. I felt guilty, but also, better. If I hadn’t been punishing myself it would have been worse, because I would’ve felt like a bad person.”
“Exactly. Exercising moral judgment makes us feel good and righteous, because our brain wants to reward us for punishing violators. But because it works this way, it hijacks our actual motivation to accomplish anything. The act of punishing feels like we’re accomplishing something, so we don’t feel like doing anything else.
“In addition, while all that is going on, our brain’s creative, problem-solving modules are idle. That’s why you were stuck before. The ideas you’re coming up with now, for how to do the work, are not things you thought of before. Some of them, I thought of when you first told me about your problem. But I didn’t mention any of them, because from where you were before, you would have said, ‘yeah, but...’ to them. Am I right?”
Ingvar admits that this, too, is true. The very same ideas he is coming up with now to get his work done, would have felt irrelevant, useless, or even insulting had someone suggested them to him thirty minutes ago… because they wouldn’t have helped him punish anybody!
The “Nice Guy” Paradigm
I’m explaining the same thing to Sara. She’s protesting that if she doesn’t think she should be less stubborn, then how will she ever change it?
“What we want and what we think we should do are two different things. If it truly would be better to be less stubborn, if that’s something you want, then not having the ‘should’ actually makes it easier.
“But what your brain is doing right now is not wanting to be better. Rather, your brain is trying to cancel out a loss.
“Right now, you are imagining a way that you would prefer things to have happened at the conference. But the fact that it didn’t happen that way is painful, because the way things actually went is not as good as what you imagined or hoped for. But if you say to yourself that you should have acted differently, then it allows your brain to preserve hope.
“If you believe you should have acted differently, then you can continue to believe that they would have accepted your ideas, if only you had been better at convincing them. It’s like holding on to a bad investment and not selling it, so you don’t have to acknowledge the loss in your mental bookkeeping.”
The specific variant of the Counterfactual Curse that Sara is experiencing is the “Nice Guy Paradigm”. (Which, despite the name, is not gender-specific; it comes from a book called “No More Mr. Nice Guy”, about becoming assertive instead of people-pleasing.)
The Nice Guy Paradigm is any belief of the form, “If I were X enough, then [other people / reality / I] would [do / be / have] Y.” (In the book’s original formulation, this was expressed more concretely as, “IF I can hide my flaws and become what I think others want me to be, THEN I will be loved, get my needs met, and have a problem-free life.”)
In Sara’s case, she believes that if only she were good enough at persuasion, not being stubborn, etc. then people would understand and accept her ideas (and respect and appreciate her).
The upside of this belief is that it allows her to continue hoping that someday, maybe she will be good enough at these things, so eventually she will get the respect and acknowledgment she deserves and desires. (Which helps her feel less bad about the fact that today, she is not getting those things.)
The downside of this belief, though, is that since what other people do is not 100% within her control, she could be the world’s best persuader and still get let down sometimes. And because this belief runs backwards as well as forwards, then when other people don’t acknowledge or respect her, she will still feel that it is all her fault. (And it will never cross her mind that some people might just be dicks or just plain unwilling to understand or accept her or her ideas, no matter how good she or those ideas may be.)
The Bad News
Another client, Victor, is excited. I’ve just explained the curse of the counterfactual as it relates to his problem. “So I should just stop using ‘should’ and everything will be better?”
“No, sorry. It doesn’t work that way. The part of your brain that ties counterfactual imaginings to moral judgment isn’t going to go away by us wishing it would. We can remove the links from the activities and situations that trigger the “shoulds”, and we can specifically question the truth of individual “shoulds” to get free of them. But it is not an intellectual exercise. It’s an experiential one.
“To put it another way, your moral judgment system can be persuaded that it made a mistake about whether to punish this one thing in particular, but it cannot be persuaded that it’s a mistake to punish things in general. (Motivating you to punish things is what that part of your brain does, after all; it’s not like it can go get another job!)”
I tell him about the time I first found out about the Curse and how to fix individual instances of it, and how I, too, thought that I “should” be able to “just stop using shoulds”. (And I’m not proud to say it took me years to fully realize the inherent meta-contradiction taking place there!)
I tell Victor about a book on the process we’ve just used to tackle one of his problems, and mention that there’s a chapter in it devoted to a session where the issue somebody wants to work on is this very one: the fact that they think they should be able to fix all their problems without having to individually address each and every “should” they have.
Victor laughs once he sees the “meta” of it, the inherent contradiction that nonetheless took me years to beat into my own skull. “So I should probably work on that first, yeah?”
Probably so, Victor. Probably so.
The Theory
The Bias
Byron Katie has a wonderful term we can use to name an instance of the Curse. She calls it “an argument with reality”. Our brain argues that, because it can imagine something better than whatever actually happened, that better thing ought, in some vaguely “moral” sense, to have happened instead.
But, since that better thing didn’t happen, that clearly means reality is wrong, and someone must therefore be punished.
(Maybe you!)
But reality, no matter how repugnant it may be (morally or otherwise), and no matter how much we want to punish it, is still reality.
And as Byron Katie puts it, “When I argue with reality, I lose… but only 100% of the time!”
Now, to our moral brains, this statement may itself seem morally wrong. “How dare you!” our brains may say. “How dare you imply that we should forgive/accept/approve historical atrocity X!”
How dare you tell us to accept the existence of suffering, death, imperfection?
It is important to understand that this is an illusion, a bias. When activated, the moral brain acts as though the only thing motivating anyone is proper punishment and disapproval. It makes us feel that, if we fail to be sufficiently outraged, then nothing will ever happen. Justice will never be done.
And it does this to us, because, for the good of the tribe – that is to say, the good of our genes! – we must be motivated to not only punish the wrongdoers, we must also be motivated to punish the non-punishers.
So when you first consider the possibility of accepting reality, over your moral brain’s objections, it will feel like you are arguing for the collapse of civilization, and the abandonment of everything you hold dear.
Do not believe this.
The Difference
You can want to end death, disease, and suffering, without rejecting the reality of death, disease and suffering.
Moral judgment and preferences are two entirely different and separate things. And when moral judgment is involved, trade-offs become taboo.
When Ingvar was procrastinating, and felt he should do his work faster, his brain spent absolutely zero time considering how he might get it done at all, let alone how he might do it faster.
Why? Because to the moral mind, the reasons he is not getting it done do not matter. Only punishing the evildoer matters, so even if someone suggested ways he could make things easier, his moral brain rejects them as irrelevant to the real problem, which is clearly his moral failing. Talking or thinking about problems or solutions isn’t really “working”, therefore it’s further evidence of his failing. And making the work easier would be lessening his rightful punishment!
So when moral judgment is involved, actually reasoning about things feels wrong. Because reasoning might lead to a compromise with the Great Evil: a lessening of punishment or a toleration of non-punishers.
This is only an illusion, albeit a very persistent one.
The truth is that, when you switch off moral judgment, preference remains. Most of us, given a choice, actually prefer that good things happen, that we actually act in ways that are good and kind and righteous, that are not about fighting Evil, but simply making more of whatever we actually want to see in the world.
And ironically, we are more motivated to actually produce these results, when we do so from preference than from outrage. We can be creative, we can plan, or we can even compromise and adjust our plans to work with reality as it is, rather than as we would prefer it to be.
After all, when we think that something is how the world should be, it gives us no real motivation to change it. We are motivated instead to protest and punish the state of the world, or to “speak out” against those we believe responsible… and then feel like we just accomplished something by doing so!
And so we end up just like Ingvar: surfing the net and punishing ourselves, but never actually working… nor even choosing not to work and to do something more rewarding instead.
The Way Out
The Methods
There are many methods we can use to combat the curse of the counterfactual.
For example, the Litany of Gendlin tells us that admitting to reality cannot make it worse, because whatever is happening, we are already enduring it. (It just doesn’t feel that way, while the mind is still clinging to its counterfactuals, as if it were a corporate executive putting off writing down a bad investment, so as not to affect the shareholders’ annual report!)
We can also use the Litany of Tarski, and tell ourselves that if we live in a world where the counterfactual is true, then we need to know that, but conversely, if we live in a world where it is not true, then we need just as much to know that, too.
These litanies, however, are more reminders that point to a thing than the thing itself. They remind us and prompt us to wrestle with the truth (or our idea of it), but they aren’t a substitute for actually doing so.
So the primary technique I use and teach for actually engaging with the brain’s moral judgment system (and then switching it off), is a variation on The Work of Byron Katie.
The Work is a process that in its simplest form consists of a few questions that, when asked in the right way, can gently lead our brain to notice that 1) our counterfactuals are not reality, 2) thinking they are reality is painful, and 3) maybe it would feel better if we didn’t think that way any more. A little ditty describing the process goes, “Judge your neighbor, write it down; ask four questions, turn it around.”
The reason it begins with “judge your neighbor” is that the technique was originally created to deal with external moral judgments about what other people should or shouldn’t do. (Like, “my father should have loved me more”.) The technique is a little easier to use on such judgments, presumably because our moral system is more oriented towards judging other people than abstract concepts. (So using it on judgments of yourself can be a good bit more challenging if you haven’t first practiced it in the way it was intended to be used.)
In this article, I am not going to get into much detail on the process, as there are free downloads at Byron Katie’s website, and she has two excellent books (Loving What Is, and I Need Your Love: Is That True?) containing transcript after transcript of people doing the process on a wide variety of beliefs, as well as additional exercises for discovering one’s judgments in the first place. Instead, I want to share the unique variations and caveats that I have learned and refined to both make the process itself clearer, and to make it easier to teach to others, especially people who are more systematically-minded and less “woo” than average.
(Note: some of Byron Katie’s books discuss sensitive topics including rape, child abuse, war atrocities, and more. In addition, some of this discussion includes having victims question their belief that such things “should not” have happened or the belief it was not their fault. And based on some reviews I’ve seen online, this is apparently even more triggering for some people than hearing about the actual events, once their moral outrage kicks in.)
The Tests
One of the biggest challenges in learning self-help techniques (or rationality techniques, for that matter), is not knowing how something is supposed to feel from the inside. We can hear people telling us to believe in ourselves, to let go and accept things, or whatever, but unless we have a way to know what these things are like, we cannot know if we’re making progress at actually doing them.
For this reason, one of the most important things I do as a mindhacking instructor is to develop tests that one can apply to one’s experience, to know if a technique is being correctly applied.
For the Work of Byron Katie, there are two primary tests that I use and teach, for the first and fourth questions, to know if you are asking the questions correctly, or actually paying attention to your answers.
The first question of the Work is simply, “Is that true?” But it’s not looking for what your reasoning says, because in the presence of moral judgment, all reasoning is motivated reasoning. (Like Sara arguing that it’s true she should’ve been a certain way, because it would have made things better… because things would have been better if she’d done things a certain way.)
Instead, the real question we are asking is something more like, “if you reflect on your experience of what has happened/been happening in reality, is it actually consistent with the way you’re insisting it’s supposed to be?”
And the most important part of that question is not “is it consistent?” but “if you reflect on your experience.” The thing that actually produces a loosening of your moral judgment is not your reasoning about the facts, but the process of inquiring into your experience of them, and your inward reflection on what that means.
This distinction is why the Work is easier for those who have easy access to their inner experience, a skill honed by Focusing, IFS, and various other therapeutic or self-help modalities. But even though those people have the ability to access their inner experience, that doesn’t necessarily mean they will actually do so. When contemplating this question, we are all tempted sometimes to deflect, to distract, or to deny the very possibility of an answer, rather than actually investigating.
Because of this, Work facilitators are trained to reject answers to this question other than “yes”, or “no”, because from the outside, this is the primary “tell” that lets them know whether you’re doing this process correctly. If you are answering with something other than “yes” or “no” – for example, if you begin some kind of explanation or story or justification – they immediately know you aren’t reflecting on your experience, but providing reasons not to, or creating distractions so you won’t have to.
Unfortunately, while requiring an answer of “yes or no” keeps a facilitator from being sidetracked by your reasoning and distractions, it doesn’t actually fix the problem of “not reflecting on your experience”, or help with even knowing whether you’re reflecting on your experience to begin with.
The First Test
But after many years of doing and teaching this process, I have noticed that there are certain patterns in the results of reflecting on one’s inner experience in response to the question “is that true?” Whenever I or my clients do the process correctly, there are actually three possible answers, not just two, when you take into account how you feel.
If you are correctly reflecting on “is that true?”, the experience of your answer will be similar to one of these three descriptions:
[feeling of lightness, release, relief] “Huh… I guess that’s not true.” (e.g. “most people should like me… huh, yeah, no, I guess there’s not any reason for that to be true”)
[feeling of heaviness, oppression] “I know it doesn’t make any sense, but it still feels true” (e.g. “I’m bad for not doing my work… I don’t want to be, but it feels like that’s just how it is.”)
[feeling of longing or regret] “I wish it were” (e.g. “I wish my father loved me more, but I guess it didn’t actually happen”)
Without understanding this sorting, people often confuse the experience of wishing something were true with it actually being true. So they answer “yes, it’s true” to the question, because the feeling of wishing it were true is rather similar to the feeling that it is true.
However, when compared to the heavy feeling of “I hate that it’s true”, the sad feeling of “I wish it were true” is a bit different, and once identified, can be handled much more easily. (It’s fairly simple, after all, to take the admission that you wish something were true, and from there, further admit that this means it’s actually not.)
Or, if you are feeling like it’s a bad-but-true thing oppressing you, then you are at least making progress of a different kind. You now know that you have an implicit belief or emotional schema you don’t endorse; that you simply learned at some point that this thing was a moral standard of your tribe. (And knowing this, you can shift to a process more suited for eliciting and correcting such beliefs.)
Or, you can also use The Work’s question 2: “Can I absolutely know that it’s true?” This question can help to loosen the sense of “rightness”, inviting you to consider how you could possibly know with 100% certainty the actual truth of an “ought”, rather than an “is”, and whether you could make that distinction in practice. (To use a legal analogy, it’s a bit like asking if there’s any conceivable doubt.)
The Second Test
For brevity’s sake, we’ll skip a detailed treatment of the Work’s other questions, jumping straight to the outcome of question 4: “Who would you be without that thought?” That is, what is your inwardly-reflected, simulated experience of how you would behave, if you weren’t thinking your “should”?
In my experience, the most common failure mode people have for this question is what I call “happy-ever-aftering”. Instead of allowing their mind to automatically generate a simulation based on the what-if, they try to specifically and deliberately envision themselves being a better person...
And then fail to notice their feeling that something is wrong!
Because about as often as not, the real, experiential answer to “Who would you be if you weren’t thinking X?”, is actually an objection, reservation, or other form of argument from your brain.
The thing you imagine doesn’t feel real or realistic. Or worse, you feel like you would be a bad person in some way, if you stopped thinking or believing the moral judgment. Or perhaps some bad consequence would happen, like maybe everybody would stop caring about their work and then nobody would have any coffee and civilization would fall apart.
These reservations can be subtle, but ignoring them will make the process fail. You may briefly feel better, having imagined a different “better world” than before, but will soon be disappointed because the oppressive “should” will return, as strong as ever. (Or perhaps be replaced by the idea that you “should” be the better person you imagined at this step!)
But what a reservation or objection simply means, is that your brain has another “should” in effect.
For example, at one point when I began this process, I felt that “I should be doing something” when I was trying to go to sleep. When I got to the part about “who I would be without that thought”, I realized that I would feel worse, like a “bad person”. Further inquiry showed that this was because I believed that not worrying about doing things meant I “wasn’t taking things seriously enough” – a new level of moral judgment to question the truth of.
If we think of our moral judgments as a belief network, where some beliefs are central (“you should take things seriously – i.e., worry about them”) and others less so (“you should be doing something right now”), then most of the time, we are only aware of the non-central ones. In our day to day lives, for example, we may often think things like, “I should have done this by now”, but only rarely do we explicitly think things like, “I’m a bad person if I’m not working.”
So when we begin the Work of eliminating these harmful judgments, we will nearly always be starting somewhere shallow. Thus, the real value of doing the process isn’t that it will fix the first thought we work on (e.g. “I should be doing something”), but that it will lead us to the deeper thoughts (e.g. “I’m a bad person”), through the objections or reservations we have about changing the first, shallower thoughts.
Then, once we are aware of those deeper beliefs, we can take steps in turn to change those. And finally, once they’re no longer supported by these central “strategic” beliefs (I’m bad/not serious/etc.), the everyday, “tactical” beliefs (I should be doing something) tend to fall away on their own.
And then we can actually think about solving our real problems, instead of merely punishing ourselves for not having succeeded yet.
The Conclusion
The Curse of the Counterfactual is a side-effect of the way our brains process is-ought distinctions. It causes our brains to compare our past, present, and future to various counterfactual imaginings, and then blame and punish ourselves for the difference between reality and whatever we just made up to replace it.
Seen from the outside, this process manifests as stress, anxiety, procrastination, perfectionism, creative blocks, loss of motivation, inability to let go of the past, constant starting and stopping on one goal or frequent switching between goals, low self-esteem, and many other things. From the inside, however, these counterfactuals can feel more real to us than reality itself, which can make it difficult even to notice it’s happening, let alone stop it.
To counteract and fix this tendency, we can use various techniques (such as the litanies of Gendlin and Tarski, and the Work of Byron Katie). But doing so is inherently effortful, in a way that cannot be bypassed by mere understanding. There are, however, skills we can learn that make it easier, and tests we can apply to our inner experience that can help us know if we’re making progress or not.
There is no permanent or universal cure for the Curse, but reflecting on our experience in the right ways can release us from individual, specific cases of it. And applied closer to the root or center of our belief networks, it can even produce broader, more dramatic shifts in our behavior and what we think of ourselves.
But that’s a topic for another article, as this post is now almost as long as a short ebook!
The Addendum
Speaking of short ebooks, if you’re interested in other bugs in the brain that switch off our problem-solving and creativity subsystems, you may want to grab a free copy of A Minute To Unlimit You, as I am currently soliciting feedback on it.
The specific kind of “stuck” it deals with is the kind where you are under pressure to do something, but all you can think about is why you can’t do it, what’s stopping you, how you don’t know what to do or can’t decide, etc., instead of anything actually helpful.
So if you have a problem like that, I’d appreciate your (emailed) feedback on the content. (Are the instructions clear? Were you able to apply the technique? What happened afterwards? Just hit “reply” on the receipt email after your download to answer.) Thanks!
I’ve come across a lot of discussion recently about self-coercion, self-judgment, procrastination, shoulds, etc. Having just read it, I think this post is unusually good at offering a general framework applicable to many of these issues (i.e., that of the “moral brain” taking over). It’s also peppered with a lot of nice insights, such as why feeling guilty about procrastination is in fact moral licensing that enables procrastination.
While there are many parts of the post that I quibble with (such as the idea of the “moral brain” as an invariant specialized module), this post is a great standalone introduction and explanation of a framework that I think is useful and important.
I’m curious what the objection to the “moral brain” term is. As used in this article, it’s mainly shorthand for a complex interaction of social learning, biases, specialized emotions, and prospect theory’s notion of a baseline expectation of what one “ought” to have or be able to get in a specific circumstance or in exchange for a specific cost. (Or conversely what some specific thing “ought” to cost.)
This statement for example:
> Motivating you to punish things is what that part of your brain does, after all; it’s not like it can go get another job!
I’m coming more from a predictive processing / bootstrap learning / constructed emotion paradigm, in which your brain is very flexible about building high-level modules like moral judgment and punishment. The complex “moral brain” you described is not etched into our hardware, and it’s not universal; it’s learned. This means it can work quite differently or be absent in some people, and in others it can be deconstructed or redirected — “getting another job,” as you’d say.
I agree that in practice lamenting the existence of your moral brain is a lot less useful than dissolving self-judgment case-by-case. But I got a sense from your description that you see it as universal and immutable, not as something we learned from parents/peers and can unlearn.
P.S.
Personal bias alert — I would guess that my own moral brain is perhaps in the 5th percentile of judginess and desire to punish transgressors. I recently told a woman about EA and she was outraged about young people taking it on themselves to save lives in Africa when billionaires and corporations exist who aren’t helping. It was a clear demonstration of how different people’s moral brains are.
Note that this is not evidence in favor of being able to unlearn judginess, unless you’re claiming you were previously at the opposite end of the spectrum, and then unlearned it somehow. If so, then I would love to know what you did, because it would be 100% awesome and I could do with being a lot less judgy myself, and would love a way to not have to pick off judgmental beliefs one at a time.
If you have something better than such one-off alterations, and it can be taught and used by persons other than yourself, in a practical timeframe, then such a thing would be commercially quite valuable.
I am aware of many self-help approaches for eliminating specific judgments. However, apart from long-term meditation, or a sudden enlightenment/brain tumor/stroke, I am not aware of any methods for globally “unlearning” the capacity for judginess. If you know how to do such a thing, please publish! You will be revolutionizing the field.
Define “it”. ;-)
I think perhaps we’re talking past each other here, since I don’t see a “complex” moral brain, only several very simple things working together, in a possibly complex way. (Many of these things are also components shared by other functions, such as our purity-contamination system, or the “expected return calculation” system described by prospect theory and observed in various human and animal experiments.)
For example, we have emotions that bias us towards punishing things, but we can certainly learn when to feel that way. You can learn not to punish things, but this won’t remove the hardware support for the ability to feel that emotion. Both you and the woman you mentioned are capable of feeling outrage, even though you’ve learned different things to be outraged about. That both animals raised in captivity and pre-verbal human children can be observed expressing outrage over perceived unfair treatment or reduced rewards, without first needing an example to learn from, is highly suggestive here as well.
I think it’s safe to say that these low-level elements—such as the existence of emotions like moral outrage and moral disgust—are sufficiently universal as to imply hardware backing, despite the fact that the specific things that induce those emotions are culturally learned. AFAIK, they have universal facial expressions, found in even the most remote of tribes, which is strong evidence for hardware support for these emotions. (There are also established inbuilt biases for various types of moral learning, such as associations to purity, contamination, etc.—see e.g. the writings of Haidt on this.)
Can you learn to route around these emotions or prevent them arising in the first place, to the point that it might seem you’re “unlearning” them? Well, I imagine that if you meditated long enough, you might be able to, as some people who meditate a lot become pretty nonjudgmental. But I don’t think that’s “unlearning” judgmental emotions, so much as creating pathways to inhibit one’s response to the emotion. The meditator still notices the emotion arising, but then refrains from responding to it.
That people can meditate for years and still not achieve such a state also seems to me like strong evidence that judgmental emotions are the function of a piece of hardware that can’t just be turned off, only starved of stimulation or routed around. The literature around meditation likewise suggests that people have been trying for thousands of years to turn off attachment and judgment, with only limited success. If it were purely a software problem, I rather expect humanity would have figured something out by now.
Nominated for similar reasons as the ones in my curation notice. I think this was the most long-term useful LW post that I read in 2019.