Beliefs as emotional strategies
In “Crony Beliefs”, Kevin Simler proposes the analogy of beliefs as employees in a company, who may be hired to carry out different tasks. “Merit” beliefs, he suggests, are in the business of accurately modeling the world, whereas “crony” beliefs are there just to win favor from others.
I like the analogy (“employees within a brain” sounds a lot like “subagents”) and the idea of beliefs that have been “hired” for the purpose of currying favor with others, but I think the essay could go further still.
I think that many beliefs which do not appear at all crony are nonetheless rooted in complicated social strategies, and that one’s thinking can change drastically when the assumptions behind those strategies are altered. The strategies in question can be much subtler than just “believing what others in your current social group believe”. Rather, such socially motivated beliefs can include deeply held, lifelong conceptions of metaphysical topics such as the nature of free will, morality, and talent.
Let’s look at this in the light of three real-world case studies.
Believing in free will to earn merit
(The person in question reviewed this section before publication and let me know which parts were fine to share.)
Some time ago I happened to talk with someone who mentioned that they had a strong need to believe in non-deterministic free will. They knew that this made no intellectual sense, but they still needed to maintain it as a compartmentalized belief. When they were younger, they had stopped believing in free will and fallen into a suicidal depression. The only way they had gotten through it was by coming to believe in free will again.
They explained that they didn’t know why they needed to believe in it, and figured that their motivational system was just intrinsically wired up that way. It felt like a hardwired, unquestionable innate need.
In response, I said that in my experience, these kinds of things are learned emotional strategies of some sort. They were skeptical of that claim… but considered it anyway. After a while, they told me that they had been raised to believe that only things you got through hard work mattered. For most of their life, they had specifically chosen to do hard things, because doing easy things had made them feel unbearably empty.
The way they thought about it, for something to count as hard work, it had to be your own choice. And something could only be your own choice if it wasn’t predetermined, i.e. if non-deterministic free will existed.
As an example of where that came from, they mentioned that their parents had never praised them for As that had come easily for them, but had praised them for Cs that had required a lot of hard work.
Since they seemed open to exploring this, I suggested a juxtaposition experience for them. What would happen if they tried recalling an incident where they had brought such a report card home, getting praise for the hard-earned Cs but not the effortlessly acquired As… and then imagined an alternate scene where they brought the same report card home, but this time their parents were simply happy with them, regardless of what their grades were or how they had been earned?
As a result of doing that, they said that they had started crying, and they felt like the need to do everything the hard way had vanished. My model of what happened was that their emotional brain had noticed their parents only valuing hard work, and generalized that to “hard work is the only thing that anyone ever values”. Once they noticed that they could coherently imagine having parents who did not make such a valuation, the generalization—and the resulting need to believe in free will—dissolved.
Sometime later I asked them how they currently felt about free will. They replied:
ahw, thank you for asking 😃 I actually had quite some thoughts about this since we last spoke, but they were all rather disjointed and didn’t want to flood you
I didn’t really realize in advance how much of a sort of “lynch pin” belief the free will actually is in my mind. It makes sense looking back on it now, cause it was so resistant to logic that I perfected a whole separate cognitive structure to protect it without giving up logic
and I mean lynch pin as referring to this sense that “well if I let this thing go, then this whole flood of other things follow”
So most of my thoughts have been around the repercussions of letting that feeling of “free will has to be a thing” go, and it recasts a pretty large part of my adolescences
like … I think the most fundamental part I can identify at the moment is this: If free will is not a thing on any level, then there are no merit points that can be earned by exerting it. Exerting free will was often a matter of fighting against one’s natural urges. Basically about not picking the “easy path”. The harder the thing you do, the more fighting is involved, the more sacrifice—the more merit points you got for doing the free will thing, because it shows you’re strong. But then if you scratch that … 1) how do you gain any merit points anymore? And if you don’t gain merit points, then how do you have value or does your life have meaning? and 2) If there are no merit points to gain by doing the hard thing, then you might as well do the thing you WANT … the thing you’re naturally attracted to and feels right, and you don’t have to feel guilty about that
I think point 1 would explain why I could never give up the free will feeling. I had nothing to substitute it with, and back when I formed the belief it was part of my defense against suicidal depression. Which made me think that maybe the timing of this conversation between us is also fortuitous cause for my brain, having kids is so meaningful, I don’t really need anything else to make myself feel fulfilled (I mean, more is a plus, but entirely not necessary).
But also think point 2 might explain why I got depressed in the first place … like if you internalize that you only have value, and your life only has meaning, if you pick the path that inherently makes you unhappy (cause it’s the hard path and not what you really want) then of course that’s going to make you depressed, and neurotic, and anxious
Moral obligation as defense against arbitrariness
I had long felt varying degrees of dissatisfaction with various aspects of my work, but kept getting stuck doing the things I knew how to do well enough to get paid for. During 2020 I made a deal with myself to only continue with my current thing until the end of the year and then quit, so that I would be forced to find something that felt more satisfying.
Separately, I had also intended many times to take a break from all the effective altruism, save-the-world kind of work in order to focus on myself and my own needs for a while, but kept getting pulled into one do-gooding thing after another. This would finally be an opportunity to take that break and spend a few years doing something that felt good for its own sake, without worrying about what kind of an impact (if any) it would have on the world.
Then at one point, as I was considering various things that I could be doing, I noticed that there were some options that most of my mind was on board with… but there was still a lingering sense of “effective altruist guilt”, some part of me saying that just doing something that’s personally meaningful rather than globally impactful is selfish: that even if I did something that was valuable for some people, it was wrong if it wasn’t valuable for the most people possible.
So I had a belief saying something like “if you have the option of doing something that helps a lot of people and an option that only helps a few people (if any), then it is morally wrong to take the option that helps fewer people”. In late February, I started investigating that belief, and found that it was associated with a felt sense of having wanted/needed something when I was little, but being told that I couldn’t have it.
And that felt sense was associated with a feeling that the only reason why I couldn’t have the thing(s) was some senseless social custom that overruled my needs—but that I also couldn’t present any case for myself, because the custom felt so arbitrary and senseless that there was nothing to argue with. It wasn’t clear exactly what experience it was drawing on—to some extent it felt like the amalgamation of many similar experiences—but it did feel connected to vaguely recalled memories of being told that I couldn’t do or have something because I was too old for it now, with no other rationale being offered.
As a result, some part of my mind had revolted against the notion of arbitrary social customs overruling people’s needs. It was saying something like “people’s needs actually matter, you’ve got to help them if they are suffering, you are not allowed to invoke social custom as a reason not to”. Maybe it thought that if the people around me had followed this rule, I wouldn’t have been treated in a way that from a child’s perspective felt senseless and arbitrary. Maybe if I could make the people around me believe in that, then I would have my needs better taken into account in the future.
That rule, in turn, led very naturally to “I personally have to help people if they are suffering, I am not allowed to invoke ordinary social norms about the whole world’s suffering not being my personal responsibility”.
Having found this child part of me, I imagined an experience of him being given love and being understood, and getting to keep doing whatever he wanted to do until he naturally grew out of it (or alternatively, just continuing to do those things forever, if there genuinely wasn’t any reason to stop).
Doing that seems to have reconsolidated the emotional belief of “my needs won’t be understood unless people are coerced into caring about suffering”, which also dissolved the belief of “I have to be coerced into helping whoever is suffering”. Since then, I haven’t had any “effective altruist guilt” associated with thinking about my life choices.
Lack of talent and people’s selfishness
In both of the previous examples, the belief in question was relatively simple. Sometimes there’s a more complicated web of interacting beliefs and behaviors.
My friend Sampo Tiensuu does math and physics tutoring as well as mental coaching for people trying to get into med school. He has told me of a pattern of beliefs that he has found in some of his clients.
A client has a tendency to easily give up whenever they face difficulties, believing any obstacles to be a sign that they are insufficiently talented. Poking at the memories and associations behind an underlying belief of “if I encounter difficulties, that means I’m insufficiently talented to deal with them on my own” eventually turns up the following:
The client’s mother has long held the belief that people are selfish at heart and only interested in exploiting others. The mother assumes that if she punishes others for their selfish behavior, then the fear of punishment will cause them to act more altruistically. This is linked to a belief that if people were fundamentally good, there would be no major problems in the world; and since there are major problems in the world, someone must be responsible for them and needs to be punished.
Now, if the mother’s child gets bad grades in school, this too has to be someone’s fault. It could be the child’s fault for not having worked hard enough, but if the child didn’t work hard enough, then that would be the mother’s fault for not having raised her child right. This, in the mother’s belief system, would imply that she was selfish and deserving of punishment.
To avoid feeling guilty, the mother has to make her child do better in school. One possibility would be to arrange remedial teaching for her child. But if her child got extra lessons that other children didn’t, then this would mean that she was getting her child an unfair advantage. This would also make her feel selfish.
The mother avoids this line of thought by deciding that her child’s poor performance is the school’s fault, and that it’s the school’s responsibility to help her child catch up. To make the idea of remedial teaching compatible with her conception of fairness, she needs to believe that there’s something wrong with her child, such as a fundamental lack of talent. If her child is fundamentally untalented, then their poor grades are not the mother’s fault; they are the school’s fault for not taking her child’s learning problems seriously enough. The mother can now make helping her child feel justifiable to herself, while also feeling like a good person because she’s punishing the school and the teachers, whom she has now cast as selfish and thus in need of punishment.
The mother’s belief that her child’s poor grades reflect a lack of talent is then absorbed by the child, who becomes demotivated whenever they have difficulties with their studies. Eventually they end up in a coaching session with Sampo, who helps them bring into awareness the experiences during which their mother communicated this belief to them. After being successfully guided to imagine a mother who didn’t feel a need to attribute the client’s poor grades to a lack of talent, the belief reconsolidates and dissolves, and the client no longer experiences difficulties as intrinsically demotivating.
A unified view of belief
In his original essay, Simler treats crony beliefs as a different kind of thing than ordinary “merit” beliefs. However, I think there is a frame in which these are not two entirely distinct categories: rather, both are special cases of a more general category of “belief”. I’ll say more about this in later posts.
Puzzle: how are both merit beliefs and crony beliefs the same kind of thing?
I enjoyed this write-up.
Buuut… I have to ask: how do we know your emotional stories are ‘correct’ and not just-so stories?
Sure, I can imagine that the whole story with the mother and the child is factually correct, yet is it really the reason that the child is giving up prematurely?
Maybe, just maybe, the child simply doesn’t like doing hard things, doesn’t like math, or is genuinely untalented; or maybe there is a completely different, crazy complex story about their relations with their relatives, some deep early childhood trauma.
From personal and other people’s experiences, I agree that hearing the ‘right emotional story’ can be satisfying and give a sense of relief. Sometimes it will even lead me/other people to make a short-term change. But that’s different from it making any long-term changes (like changing one’s work ethic), and even that doesn’t mean it is true [just a helpful delusion!].
Of course, we can’t know for sure. It could be that the interventions actually worked by a different method than they seemed to.
But consider e.g. the first story. Here was a person who started out entirely convinced that the belief in free will was an intrinsically hardwired need of theirs. It had had a significant impact on their entire life, to the point of making them suicidally depressed when they couldn’t believe it. I had a theory of how the mind works which made a different prediction, and I only needed to briefly suggest it for them to surface compatible evidence, without me needing to make any more leading comments. After that, I only needed to suggest a single intervention which my model predicted would cause a change, and it did, producing a profound and long-lasting change in the other person.
Because I do expect it to be a permanent change rather than just a short-term effect. Of course, the first two examples are both from this year—I didn’t ask Sampo when exactly his example happened—so in principle it’s still possible that these will reverse themselves. But that’s not my general experience with these things—rather, these interventions tend to produce permanent and lasting change. The longest-term effect I have personal data for is from June 2017; this follow-up from December 2018 still remains a good summary of what that intervention ended up fixing in the long term. (As noted in that follow-up, it’s still possible for some issues to come back in a subtler form, or for some of the issues to also have other causes; but that’s distinct from the original issue coming back in its original strength.)
So it’s possible that my model is mistaken about the exact causality—but that by treating the model as if it was true, you’re still able to cause lasting and deep changes in people’s psychology. If my model is wrong, then we need another model that would explain the same observations. Currently I think that the kinds of models that I’ve outlined would explain those observations pretty well while being theoretically plausible, but I’m certainly open to alternative ones.
I don’t think that e.g. just “hearing the right emotional story can produce relief” is a very good alternative theory. I’ve certainly also had experience of superficial emotional stories that sounded compelling for a little while and whose effect then faded out, but over time I’ve learned that a heuristic of “do these effects last for longer than a month” is pretty good for telling those apart from the ones that have a real effect. The permanent ones may also have an effect on things you didn’t even realize were related beforehand—e.g. the person in the first example analyzing the things that they realized about it in retrospect—whereas in my experience, the short-term ones mostly just include effects that are obviously and directly derivable from the story.
So some compelling stories seem to produce relatively minor short-term effects while other interventions cause much broader and longer-lasting ones, and the mere hypothesis of “emotional stories can be compelling” doesn’t explain why some emotional stories work better than others. Nor would it have predicted that the specific intervention I suggested would be particularly useful.
All of that said, I do admit that the third story has more interacting pieces and that the overall evidence for it is weaker. We can only be relatively sure that telling the client to imagine a different kind of mother was the final piece in resolving the issue; it’s possible that the other inferences about the mother’s beliefs are incorrect. I still wanted to include it, in the spirit of learning soft skills, because I think that many beliefs affecting our behavior aren’t nice and clear-cut ones where you can isolate a single key belief and be relatively sure of what happened because you can observe the immediate effects. Rather, much more of our behavior is embedded in an interacting web of beliefs like the one I outlined there. Even if the details of that particular story were off, enough of it resonates in my inner simulator that I’m pretty sure that something like it could be true and often is true. But for that one I can’t offer a more convincing argument than “load it up in your own inner sim and see whether it resonates”.
I’ve been reading “Feeling Great” by David Burns and it seems to have very similar ideas.
His thing is, when the patient has negative thoughts & feelings and says they wish they didn’t, he says “Hang on are you sure you want to get rid of those? Let’s think about all the ways that those negative thoughts & feelings are helping you, and let’s think about all the ways that those negative thoughts & feelings exemplify awesome aspects of your personality.”
Like the thought “I’m a hopeless case” is helpful because you don’t have to keep working hard to get better, and you don’t have to feel at fault for still having problems. And having that belief shows that you’re a realistic and observant person. Etc. etc.
And they don’t even start trying to get rid of the negative thought until they’ve talked about this for a while and the patient is satisfied that they have a path to keeping those positive aspects, or that they’re really OK giving up on those positive aspects.
It rings true to me and has seemed to be helpful so far, and it definitely seems related to what you’re saying here. And the book makes that process really straightforward and step-by-step. :-)
A ‘merit belief’ is out to model the world, but not in an unmotivated way. It wants to model a part of the world that’s relevant to someone else; you can tell it’s accurately modeling an important part of the world because it wins favor from others (by being useful to them).
See johnswentworth’s posts on price signals in biology and so on.