Motivated Cognition and the Multiverse of Truth
Related to: Truth seeking is motivated cognition and Contingency is not arbitrary, Do Sufficiently Advanced Agents Use Logic?
I have a weird belief; it’s both controversial and obscure:
Motivated Cognition (MC) has a serious chance to be a good epistemology.
Even if MC is wrong, it’s very useful for goals which are not about predicting reality. And MC is a path to ideas which don’t require MC, but which are hard to introduce without it.
Properly knowing the idea of MC and knowing how to apply it makes any person ~2 times smarter, more perceptive and understanding.
There is an unexpected duality between Bayesian reasoning and MC. I’m not confident in this part of my belief, but I think this should be true at least to some degree.
I want to have a rational and semi-rational discussion about MC. “Semi-rational” meaning you’re ready to suspend some judgement to fully appreciate the idea. If this idea is true, it’s as important as the Scientific method or Bayes’ theorem, so the stakes are very high. The post contains all the cruxes of my belief, so I’m ready for a double-crux. Sorry if I write too arrogantly in a couple of places in the post; I just want to emphasize my point.
I don’t want to “shock” people with my belief. I know it doesn’t sound nice, but it’s original and it doesn’t hurt to entertain. And I realize that it also sounds too ambitious: what’s that part about “2 times smarter”, how can I know that, and what does that even mean? I hope my post will be able to clarify this.
I believe that, at the very least, analyzing MC could help us understand people who criticize and disagree with the rationalist community. I think this by itself is important enough.
Intro: what is Motivated Cognition?
Two types of MC
Let’s differentiate two types of motivated cognition:
Local wishful thinking is about random wishes that don’t take into account facts. “I want to fly, so I jump off of a cliff towards injury and death.”
Global motivated cognition (MC) is about your most important wish + facts.
Logical thinking is about “logical inferences + facts”, MC is about “wishes + facts”.
You can’t use logic without all facts and all inferences. In the same way, you shouldn’t use MC without all facts and all wishes.
How can MC possibly work?
Note that typical criticisms of MC may be unfair:
Criticism: “Reality doesn’t care about your wishes.” Response: Reality doesn’t care about any thinking method.
Criticism: “You can’t know about reality without investigating it.” Response: Wishes can be caused by investigating reality. Also, your “wishing mechanism” is an unexplored part of reality.
Reasons why MC can work without magic:
1. Maybe it just happens to work in our world, by a weird accident. MC could still count as empiricism and have objective truth. It wouldn’t be too crazy.
2. Maybe MC is equivalent to logical reasoning. Maybe MC is a complicated consequence of logical rules + basic facts about our world.
3. Maybe you can “exploit” human psychology, i.e. learn a bunch of true facts about the world by knowing some psychological facts and filtering people’s opinions.
In this post I give informal arguments for why 1 or 2 or 3 may be the case.
Example: a MC-based belief
Here’s an example of a MC-based belief:
I believe that in some important way (most) people have equal intelligence.
This belief is based on a fundamental enough wish: a world where certain people can’t exist on certain levels of intelligence would be very weird and sad.
Components of the belief:
On one hand, it’s simply a description of what I’m interested in: I’m interested in the core of intelligence which is shared among all people. It’s a bit like a tautology, as if I’m defining general intelligence as something which everyone shares.
On the other hand, it’s truly a belief, something that eventually leads to different expectations. And it can be refuted by reality (e.g. if certain individuals have a lot of “general intelligence” skills absent in other people).
I’m agnostic about whether my belief can be expressed in Bayesian epistemology: maybe I just have different priors and different reference classes than people who disagree with me. I don’t really know if my belief is MC-based or not.
I can imagine a possible world where people are equally smart/their intelligence is equally effective. A world which rewards people more often and more diversely. Something like a “no free lunch theorem” world, where there are a lot of puzzles for different types of people. Such a world is deeply important to me, so it feels as if its mere logical possibility should matter.
Note that the same reasoning doesn’t apply to the belief “God exists” for a couple of reasons: (1) it’s a weird belief, because God is supposed to exist “beyond” our world; (2) the evidence of absence for God is too strong; (3) you can argue that the existence of God is not a fundamental wish; wishes about people should come first.
Part 1: Motivated Cognition is interesting
This part of the post explains why Motivated Cognition is interesting to study. Even if it’s wrong.
Ethics
Ethics is an example of solid MC.
For example, we don’t kill each other because we don’t want to.
But that doesn’t mean it’s an arbitrary wish and we can just start wanting otherwise.
Politics
Many people link politics to MC.
But people do this in caricatured ways, without actually analyzing MC in general.
Human biases
MC is considered to be one of the human biases, obviously.
And yet people don’t actually think through what MC is, how it could work and what it implies.
Legal reasoning
Legal reasoning and a true crime example.
The whole point of making laws and arguing in court is to reverse-engineer paths to conclusions our society wants to reach.
Physics
MC plays a big role in Physics (of all fields) in the shape of pursuing beauty.
The Truth About Beauty in Physics
Religion
MC gets linked to religion; this is obvious too. The link brings up four types of questions: who would actually want to believe in [religion X], and is it convenient? what is a believer allowed to want as the best possible thing [e.g. “paradise”]? what can God want? what are the reasons to not want God to exist?
I don’t know how much of it was explored, given that people don’t treat MC seriously.
Philosophy
Expected utility (Pascal’s Wager)
The idea of expected utility can sometimes give justification to MC.
See: Pascal’s wager, Pascal’s mugging, Buddhist wager argument for rebirth and Atheist’s wager.
I’m confused why this topic barely went beyond religion.
Anthropics
Anthropic reasoning is sometimes related to MC in a roundabout and very weak way.
Example 1. If you want to live in a paradise, make sure that you can be born only in a paradise.
Example 2. If parallel worlds exist, then there are versions of you that are infinitely lucky. And versions that are infinitely unlucky (agonizing minds that are barely alive).
Induction
Universal prior may be affected by beings in different universes who pursue their own goals. This is a very strange example of “motivated” cognition.
Rationality
Logical decision theory
In Logical decision theory you decide what logical facts are true based on profit.
It is related to MC, even if it’s not meant to be.
Modest epistemology
You could say that rejecting modest epistemology is an epistemological decision based on MC. You agree to the risk of being wrong in order to use your brain to the fullest.
In general, you can argue that logical reasoning just can’t handle all the (meta-)epistemological problems. Parallel universes, infinities, mugging, modesty… at some point you just have to say “I want to live a certain life. So I’m going to assume I can live it. I’m not going to keep entertaining the possibility that I’m a Boltzmann brain or that a random mugger is a Matrix Lord. Even if I have to violate Bayesian updating.”
Wishes are not trivial pt. 1
The rationalist community knows that desires for immortality or “infinite pleasure” are not trivial topics.
Stories about evil genies and monkey’s paws teach us that wishes are not trivial.
Ethical paradoxes teach us that wishes are not trivial. Orthogonality Thesis (AI) is about wishes and it contradicts our intuition.
So, on the one hand we laugh at MC, but on the other hand we are deeply confused about wishes, because they are as complicated as rocket science. In my opinion it’s almost doublethink.
Wishes are not trivial pt. 2
Reality teaches us that wishes are not trivial.
A lot of people die tragically because the bad guys simply fail to want good things for themselves.
There was a time the bad guys were just kids who wanted good things… but at some point the “wishing mechanism” got broken. And now innocent people die because the bad guys pursue some abstractions such as “infinite wealth”, “infinite power” or “infinitely conservative values”. People turn into paperclip maximizers.
I don’t understand how people can laugh at MC while knowing this. Even simply wishing something actually good for yourself is not trivial.
Conclusion
So, if you study ethics, politics, legal systems, physics, religion, human biases or philosophy it would be rational for you to be interested in Motivated Cognition in general.
MC is also related to skills. When you learn to move your body you learn to balance wishes and facts. The same thing when you learn to make and execute plans. When you play chess you often have to play expecting the opponent to make a mistake, i.e. expecting an illogical thing to happen: fake it till you make it.
Part 2: MC and communication
This part is about the way Motivated Cognition could help us in communication. Even if it’s wrong. MC helps me immensely and I just can’t imagine living without it.
MC helps me understand and remember opinions
My goal here is not to accuse people of wishful thinking. I’m just saying I use MC to remember and model opinions of others.
Roger Penrose
Roger Penrose has a pretty complicated theory (Orchestrated objective reduction). It’s about consciousness, but also about purely mathematical topics and about two physical theories (quantum mechanics + general relativity). I would have a hard time remembering the theory. But I know an easy trick to remember it:
Imagine the best possibility (for humans) consistent with today’s physics. Imagine the best (for humans) mathematical facts.
You automatically get Penrose’s theory.
Eliezer Yudkowsky
Eliezer Yudkowsky has a pretty complicated opinion about ethics (Morality as Fixed Computation). It’s about algorithms, but also about probability, math and epistemology. People who know much more about math and algorithms than me struggle to understand Eliezer’s idea. However, I know an easy trick to make everything clear:
Imagine the best (for humans) version of ethics consistent with Eliezer’s simpler beliefs. Related to Eliezer’s interests.
Imagine the best version of ethics toned down by Eliezer’s “pessimism”.
You get something like “ethical statements are metaphysical (algorithms), they have the most interesting epistemological properties of everything (math, probability and counterfactual worlds)”. Now I can recite Eliezer’s opinion even in my sleep.
MC and philosophies
Immanuel Kant
Can you easily remember Immanuel Kant’s philosophy in enough detail? I bet many of you can’t, especially if you disagree with Kant and don’t like philosophy in general. I mean, his philosophy is massive, we have A LOT of ground to cover. However, MC helps me to simplify things:
Our perception is more important than the absolute truth. This is a logical necessity. (That’s interesting and good for us.)
Morality combines choice and obligation. (That’s the most interesting possibility.)
Moral law is unconditional. (That’s convenient.)
Morality = logic. Being a bad guy is simply inconsistent. (That’s convenient.)
There is a mix between a priori and a posteriori knowledge. (That’s the most interesting possibility.)
Moral law comes from not treating other people as tools. (That’s very simple and beautiful.)
That’s it, we covered most of it. Simply being smart and being an optimist gives you a very good approximation of Kant’s philosophy.
Gottfried Leibniz
Check this out: Monadology. Simple substances, fractals, “causality doesn’t exist” and God. Do you understand everything, does everything click together? I may be mistaken, but this is obscure even in the field of philosophy. However, we have MC to simplify everything:
Monism (“everything is a single thing”) is convenient. As is idealism.
Fractals are convenient (because they are the same on all levels).
Causality is not convenient, it’s complicated to keep track of and boring. “Harmony” is a more interesting idea.
God is the sum of all important things. That’s simple and beautiful.
That’s it, we covered everything. Again, being smart and being an optimist (and not being afraid to study esoteric ideas) gives you a good approximation. MC allows you to crunch different philosophies in seconds.
Moreover, using MC you can easily merge Kant’s and Leibniz’s philosophies. Sadly, this hasn’t been done, and it’s a shame. If it had been done, I probably wouldn’t even need to explain and defend MC.
Classification
So, MC helps to understand, simplify and classify opinions. Not bad.
MC also makes you curious: what is the point of convergence of different opinions? It definitely exists, since we can evaluate at least some opinions as less or more optimistic. What opinion do you get when you don’t “tone down” MC by pessimism or specific interests?
But we haven’t finished yet. It gets weirder.
MC “predicts” Science
There’s a comic by SMBC about Quantum Computing:
https://www.smbc-comics.com/comic/the-talk-3
The moral of the story is supposed to be something like “popular sources are misleading, precise math is the best”. (Mom: “For generations physicists had a custom when discussing these matters with outsiders they wanted to avoid being too… graphic. Too explicit. “Gulp” Mathematically precise.”)
However, the son’s mistaken interpretation… simply sounds less interesting than the truth. I wouldn’t want it to be true. If you’re given a mix of true bits and false bits, it’s easy to filter out the false bits using MC. Not necessarily on the first try: you can deal with conflicting information using MC.
So, MC even helps me remember Science. Because misconceptions are boring. And if a misconception is truly interesting… then it’s worth making.
Language cooperation and MC
Gricean maxims tell us that conversation is about cooperation. This fact lets us skip a lot of information in our speech and yet understand each other perfectly well. Because we have an agreement to tell each other relevant information and not hide anything important.
For me MC is a natural extension of Gricean maxims:
If we discuss (X), we should discuss the best and the most interesting version of (X).
If we think that the best version of (X) is impossible, we should at least mention it anyway.
If you don’t even mention the possibility of the best (X), it means you are not properly aware of it.
Living in a world where people are not familiar with MC sometimes does feel as crazy as living in a world without Gricean maxims. To the point that communication feels impossible.
Steelmanning, avoiding weak men and being charitable somewhat compensate for the absence of MC, but most of the time people don’t apply those techniques to the fullest extent.
Part 3: MC and research
This part is about the way Motivated Cognition could help us in research. Even if it’s wrong.
The best solution
Let’s say you’re solving a problem. It may be useful to imagine a perfect solution, a perfect outcome of solving the problem. Even if it’s impossible.
However, to imagine the perfect solution to the problem A you may need to be aware of the perfect solutions to the problems B, C, D, E, F, G and H.
And if you never do MC in any way, shape or form, then you may be simply unable to do it.
Pervasive effects
I think that total neglect of Motivated Cognition has a very negative effect on society:
People avoid MC. People forget what “desirability” means.
Having forgotten about “desirability”, people stop desiring to find original theories. People forget what “originality” means.
Without MC people’s empathy towards opinions of each other suffers. People become less aware of each other’s ideas. I mean truly, deeply aware.
The absence of “desirability” and “originality” metrics and “empathy” negatively affects people’s general comprehension of theories and ideas.
So, the pipeline goes like this: “no desirability → no originality and empathy → bad comprehension”.
Originality is really dead at this point: “original ideas” is not a concept that exists in our society. People learn a couple of original ideas in physics, math, philosophy and fiction… and then kind of forget that they can expect to find more original ideas? Or that it’s a thing you can look for? Instead people try to estimate “truthfulness” directly and fail horribly.
Logical decision theory is arguably the most original idea in Rationality and one of the most original ideas in philosophy in general, but even the LW community doesn’t notice it.
Studying perfection
There’s one rule I think about: “if you don’t assume that X is perfect, you don’t study properties of X, you study something else” or “only perfect things can be studied”.
If you don’t assume that human reasoning is perfect, you don’t study human reasoning, you study math or evolutionary psychology. Something else that you do consider to be “perfect”.
If you don’t assume that humans are perfect, you don’t study humans, you study evolution on the planet Earth.
If you don’t assume that our Universe is perfect, you don’t study our Universe, you study Turing Machines. (e.g. Conway’s Game of Life)
If you can’t imagine that Motivated Cognition can be a perfect epistemology, then you don’t really get curious about Motivated Cognition.
So, if you really want to study X, your reasoning may seem like MC to other people. See also what I wrote about Gricean maxims. For other people MC is insanity, but for me MC is about basic rules of dealing with information.
Part 4: Epistemologies are broken
You may warm up towards Motivated Cognition if you notice that the intuition for “logical reasoning” may be coming from a completely wrong and confused place. A very big part of my belief in MC is based on my disbelief in what people label as “logical reasoning”.
Folk Epistemology
Imagine for a second that Bayesian Epistemology doesn’t exist. Do you notice something strange in the world?
I do: people don’t have any epistemology (null, zero). And yet every single person is dead sure that some mythical “logical reasoning” exists. Not a single person asks any questions.
People don’t understand the difference between formal and informal logic.
People don’t realize the difference between logic and epistemology.
People don’t study argumentation even though they think it’s a key skill. In particular, people don’t study legal reasoning (argumentation on the level of the law).
People don’t acknowledge that the field of “correct reasoning”/informal logic doesn’t exist.
Even in philosophy people barely question logic. Not in a way that would actually affect the field. So, we do have logical pluralism and logical nihilism, but even the article about them is filled with unquestioned argumentation...
Nobody asks “Wait, when did I learn logic? What is it? How can I expect to be any good at it?”
The folk belief in “logical reasoning” is far stranger than any religion. Religious people at least disclose the (supposed) sources of their beliefs and what those beliefs mean to them. I’m deeply confused by the folk belief in logic; I can’t come up with even an uncharitable explanation. You need zero self-awareness and zero debate experience in order to not question epistemology.
Is Bayesian Epistemology better?
I think Bayesianism is a brilliant formal epistemology. At least it actually exists.
However, in informal areas of human reasoning Bayesianism has a lot of problems. Rationalists may repeat and even “justify” folk’s mistakes:
Instead of looking at the problems a bayesian may double down on the fact that “Bayesian axioms are inescapable, they are pure logic”. Similar to how folks double down on “logic is logic, it can’t be wrong” without noticing that logic isn’t even an epistemology.
Just like formal logic, Bayesianism depends on the way you split reality into labels. Reference class problem.
Bayesianism doesn’t model your reasoning and informal argumentation.
A straw bayesian may reason like this: “Yes, I guess I don’t know how informal argumentation works. But who needs it when I have the correct epistemology? And anyway, if somebody forced me with a gun to make bets on arguments, I would make those bets. So, I guess I already know how everything works!”
The Sequences (the foundational text) don’t actually analyze argumentation.
Even when bayesians discuss saving the world, they argue like everyone does.
Folks and philosophers often question their knowledge and starting assumptions, but rarely question their inferences. But in informal reasoning the process of inference is more complicated than the rules of formal logic. So, paradoxically, actual argumentation never gets questioned. The most interesting parts of arguments never get questioned. And I’m afraid that rationalists inherit this blind spot. It seems rationalists rarely ask “Wait, can I really apply lessons from Bayes’ theorem to informal reasoning?”
The main problem of epistemology
The main problem of informal epistemology is splitting reality into labels. Without that you can’t apply formal logic or Bayes’ theorem. But there are no rules for attaching labels to real things.
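To make the label problem concrete, here’s a minimal sketch of the classic reference class issue (all numbers are invented for illustration; this is my own toy example, not something from the post or the Sequences): the same evidence, pushed through Bayes’ theorem, gives different answers depending on which label/reference class you attach to the thing you’re reasoning about.

```python
# Toy reference-class illustration: same evidence, different labels, different posteriors.
# All numbers are made up for the sake of the example.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' theorem for a binary hypothesis H given evidence E."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Question: "Will this startup succeed, given that it has a strong founding team?"
# Label it "a startup in general":      assumed base rate of success = 10%
# Label it "a VC-backed tech startup":  assumed base rate of success = 30%
p_team_given_success, p_team_given_failure = 0.8, 0.3

print(posterior(0.10, p_team_given_success, p_team_given_failure))  # ~0.23
print(posterior(0.30, p_team_given_success, p_team_given_failure))  # ~0.53
```

Nothing in the formula tells you which label is the right one; that choice happens before any updating starts.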
Motivated Cognition is the only idea I know of that has at least a chance to solve the problem of labels. Because pairs like (label, real thing) have different convenience factors. Which means we can differentiate and choose such pairs.
Part 5: MC has roots in models of argumentation
If you try to model actual human argumentation, you may see how it leads to Motivated Cognition.
I discovered the models below because of MC, not the other way around. However, they retroactively made MC self-evident. Sorry that I don’t go into details of each specific model right now (the post would be too big).
Model 0: arguments are relative
Imagine that arguments don’t have any intrinsic power, even intrinsic meaning. It’s not so hard to imagine. If this is true, you need some meta-thing to give arguments power and meaning.
Motivation can be this meta-thing.
Irony
The most ironic thing may be that “logical reasoning” itself is often best modeled using MC, not logic. Let’s illustrate that and also give an example of an argument “without intrinsic meaning”:
“But, the theologian shook his head sadly, and said that the atheist was naive about the emotional depth of the experience of ‘faith’, that it wasn’t a concept invented by culture, but a feeling built into all human beings. In proof of this, the theologian offered the analogy of someone who’s told that their lover has been unfaithful to them. If the evidence wasn’t conclusive—and if you really loved that person—then you might think of everything they meant to you, and everything that you had done together, and go on putting trust in them. To trust someone because you love them more than anything—we would even call this believing in your lover. This, the theologian said, was the emotional experience at the root of faith, not just a trick of argument to win a debate. That’s what an atheist wouldn’t understand, because they were treating the whole thing as a logical question, and missing out on the emotional side of everything, like Spock. Someone who has faith is trusting God just like you would trust the one you loved most.”
“I think he [the atheist] shook his head sadly, and commented on how wretched it was to invent an imaginary friend to have that relationship with, instead of a real human lover.”
Maybe the atheist’s argument makes a lot of sense. Maybe it doesn’t make any sense. Why posit a (false) dilemma? It’s probable that the atheist’s argument is best modeled using MC, not logic:
“You should focus on your single most important wish. So, wish to love the very real people near you, not abstract otherworldly entities.”
What do you do when MC emulates logical reasoning better than logic does? It covers the hole in the argument which would otherwise take a lot of debate to cover. (Related: Warrant and Tortoise vs. Achilles.) And it would save a lot of time for the theologian, too.
A more radical claim: maybe “logical arguments” simply don’t exist in informal argumentation, only MC-based ones do. Without MC you can’t make sense of any informal debate.
Model 1: choosing concepts
Model 1. Any concept such as “intelligence” has an infinity of versions. In order to reason about the world you need to choose what versions are the most important. You choose based on your previous choices (interests).
This model easily leads to MC.
Model 2: looking for information
Model 2. Instead of theories about the world you can analyze “statements” about the world. A statement can contain a lot of true information even if it’s false. Informativity of a statement doesn’t determine its truth, but affects it. Informativity depends on context.
Context-dependent truth eventually leads to interest-dependent truth (when you choose which context is more important to you), so this model leads to MC too.
Bold conjecture
Maybe any high-level analysis of information leads to Motivated Cognition.
Because inevitably you need to introduce some additional parameter to “truth”. Convenience, beauty, closeness to your interests, informativity. This additional parameter can be interpreted as MC.
Model 3: Bayesianism inside out
There’s more: you can say that Motivated Cognition is “Bayesianism turned inside out”.
Bayesianism assumes you can describe reality in terms of an infinity of micro atomic events with one parameter (probability).
I think you can describe reality in terms of a single macro fuzzy event with two parameters (probability and complexity). For example, this fuzzy event may sound like “everything is going to be alright” (optimism). You treat this macro event as a function and model all other events by iterating it. Iterations increase complexity and decrease probability. It’s like doing Fourier analysis, modeling an infinity of functions with a single function (the sinusoid).
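For reference, these are the two textbook formulas the comparison leans on; they are standard identities, not claims specific to this post. Bayes’ theorem updates over a partition of “atomic” hypotheses, while a Fourier series approximates an arbitrary function by iterating a single function (the sinusoid) at higher and higher frequencies:
$$
P(H_k \mid E) \;=\; \frac{P(E \mid H_k)\,P(H_k)}{\sum_i P(E \mid H_i)\,P(H_i)},
\qquad
f(x) \;\approx\; \frac{a_0}{2} + \sum_{n=1}^{N}\Big(a_n \cos(nx) + b_n \sin(nx)\Big).
$$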
Logical arguments increase bias
Real-life arguments control how much evidence you need to start questioning something. If you think that arguments have intrinsic meaning, you may unconsciously increase your biases. Take a look at this dialog:
A: If we were consequentialists, we would do horrible things. Turn everyone into orgasming blobs.
B: No, you don’t understand, a more complicated version of consequentialism fixes this.
A: I think there are still some problems...
B: Any concern is, by definition, a consequence, so it gets factored in.
In this situation B needs N times more evidence to start questioning consequentialism compared to A. Or even an infinite amount of evidence. Because B probably treats his choice as “logic” and therefore puts little to no uncertainty in the choice. Yes, you can keep fixing consequentialism, but is it what you should’ve chosen in the first place?
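One hedged way to put a number on “N times more evidence” is odds-form Bayes with invented priors (this is my own illustration, not something stated in the dialog). Say A holds consequentialism at prior odds 1:1 while B, treating the choice as “logic”, holds it at 99:1. With L the strength (likelihood ratio) of the evidence against the view,
$$
O_{\text{posterior}} \;=\; \frac{O_{\text{prior}}}{L},
\qquad
L_{\text{needed}} \;=\; \frac{O_{\text{prior}}}{O_{\text{posterior}}},
$$
so to reach any given posterior, B needs a likelihood ratio 99 times larger than A, roughly $\log_2 99 \approx 6.6$ extra bits of evidence against. And if B’s prior is literally 1 (“zero uncertainty in the choice”), no finite amount of evidence moves B at all.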
Good or bad, MC gives you razor-sharp awareness of such situations. This topic is also one of the main topics in modern politics. (See “social privilege” and “social constructionism”.)
Part 6: MC in basic beliefs
Even when I think about my most trivial opinions, I feel that Motivated Cognition is my core reason for (not) believing in something. For example, take a look at this opinion analyzed by Julia Galef:
A friend of mine had a conversation with a guy recently in which he said that he can’t respect a girl if she goes home with him on the first date. My friend asked him why and he said “well, I mean it’s so risky, didn’t her parents ever teach her not to do something as stupid as going home with a guy she barely knows”.
I wasn’t there, but if I had been I would have asked him “Okay so you say the reason that you don’t respect girls who hook up with you on the first date is that it’s a risky thing for them to do, so let’s say a friend of yours went out for a walk at night and in a dangerous part of town all by himself—that’s risky—would you then lose respect for your friend?”. And obviously I didn’t ask him this because I wasn’t there but I strongly suspect the answer is “no”, you wouldn’t lose respect for a friend who did that. You might say “dude, that was stupid, you shouldn’t do that” but what this points to is the fact that his original reason that he gave for disrespecting girls who hook up with him early on was just a rationalization and that there’s actually probably a lot of other things going on there about purity and attitudes about traditional sexual mores.
I would blame the guy for violating Motivated Cognition: judging someone needlessly. You shouldn’t lose respect for anyone unless you’re forced to (and even then there are ways to not lose 100% of respect).
I can’t blame the guy for missing some obscure counterargument. Which doesn’t even work unless you want it to work. I believe there are situations when a single argument can tilt you towards an opinion, but I don’t think this is such a situation. If the guy answered “Yes”, his opinion would still be nonsensical.
Argumentation often feels to me needlessly ad-hoc and constraining what we really want to say. The guy’s opinion has bad vibes and we know where those vibes are coming from. Why are we trying to frame it as a miscalculation in some “logic game”?
Believing in Science
Even when I think about scientific theories (evolution, round Earth, General Relativity, etc.), “I believe because I want to believe” seems like the most natural explanation of my belief. MC encapsulates the knowledge, emotions and attitude associated with the belief.
But why? Because I can’t quantitatively estimate how much I trust in different types of evidence. But I know how much I want to trust Science. And I know how much I need to believe in order to get anywhere. So I can’t fully buy the Bayesian explanation of my trust in Science.
Understanding consensus
MC helps me understand consensus about questions which I can’t fully check myself. For example, there’s a consensus in AI Alignment research that simple solutions to Alignment don’t work. One of the bad solutions is just encoding a bunch of good values in AI.
Motivated cognition helps me to accept that, because MC is easy to align with the consensus:
It’s not a solution I would want to work.
It doesn’t solve the problem I would want to solve.
It’s not the can of worms I want to think about. (Manually encoding values.)
MC also helps me to accept mathematical results, concepts and arguments, such as “actual infinity” and Cantor’s diagonal argument. If I didn’t know about MC, I could sympathize more with people who reject infinity. If you think that “logical reasoning” is all there is, then it’s hard to go and make such metaphysical commitments.
Propaganda
People think propaganda proves that we need to lean on facts and logical reasoning.
I think propaganda proves the exact opposite: logical reasoning is useless for our species unless you force everyone into some “logical totalitarianism” with a single true source of facts and an obligatory “school of correct thought”. If a person is affected by propaganda too much it’s a symptom that the person suffers from a deeper thinking mistake. How can you become hateful by encountering a couple of wrong facts?
I think the absence of MC is the reason why some people feel “forced to become hateful by facts”. When you forget what you “would want to be true”, you forget both what’s true and what you want.
Part 7: Emotional arguments
Here I just give two “emotional” arguments for Motivated Cognition. We’re nearing the end of the post.
Childhood argument
When you first learn the concept of “logical reasoning” as a kid, it’s nonsense. Because you can’t possibly know enough to make sense of the concept.
And you can argue that the concept never starts making sense later.
When you’re a kid, you use Motivated Cognition: you balance facts and desires. It makes sense because you have nothing else to go on.
And you can argue that it never stops making sense.
Edge cases argument
In some edge cases I can physically feel how ethics and truth combine into a single concept, somewhat similar to a chicken-or-egg situation… in such cases I think:
Is this opinion too evil to be smart or too stupid to be ethical?
Such edge cases can give intuition for Motivated Cognition. Other edge cases (where it feels MC would 100% help) happen when misunderstanding between people is too great:
For example, imagine a person who thinks “humanity should become a mindless orgasming substance, because pleasure is good and a lot of pleasure is even better… you disagree because you haven’t tried” and doesn’t even see the possibility of a problem with their opinion. I feel that Motivated Cognition could help to bridge misunderstanding in cases when nothing else can. “I want people to be something more complicated than orgasming blobs. I want to believe in fragile wishes that should be protected, because it’s an interesting possibility. I want to believe that choice exists and it matters.”
Part 8: evidence & priors for MC
The last two cruxes of my belief in MC:
The world is ridiculous
Something feels off to me about the world:
Our society devalues human experience (and, as a result, human life). In our society knowledge is power, and a hundred years of suffering is “worth” less than some mathematical theorem.
There are too few original ideas.
Ideas die off too soon. Ideas barely get developed.
Society is too fragmented.
Intelligence is too fragmented. E.g. mathematical knowledge doesn’t give you nearly as much general intelligence as it could have, assuming math is the limit of abstract thinking. And if it’s not the limit, then what is?
Rationality is less effective than it could be. People find success without rationality. A lot of people are not eager to become full-on rationalists.
LW-rationality implicitly assumes that what you see today is all you can get. The only thing which remains is to optimize the hell out of it and run towards Singularity.
But I think something fundamental is missing. I think we missed something fundamental about human intelligence. Or maybe I know it:
MC and perception
I experience different qualities of a person as parts of a single experience.
And I experience qualities that are usually thought to be universal (e.g. “kindness”) as having different versions for different types of people. Like colors in a spectrum. Or vibes.
Now, what does it have to do with MC being true or false?
MC easily justifies such a model of perception/personality. Because it’s the best and the most interesting possibility.
In MC truth is not an infinite list of facts, but something monistic. Which resonates with a monistic model of perception.
In MC you “normalize” everything, analyzing ideas in their own context, not in a universal epistemology. And you get my experiences when you “normalize” your perception.
One model of MC talks about choosing versions of concepts. And in my perception I have “versions of experiences”. I think that perception and argumentation are aspects of the same thought process.
The generalized version of MC leads to an even stronger analogy between perception and argumentation.
My experience of people completely contradicts the way we model intelligence and the way we “treat” people.
I wouldn’t expect to have such experience in a world where MC is false.
And this is my most important experience. This experience “by itself” nudges me towards MC even more, which I can’t explain without sharing the experience.
Note: probably I’m not updating purely on the possibility of my experience. If you want I can try to get into details.
Part 9: The summary
So, when I think about Motivated Cognition, I think that:
It’s important to explore even if it’s 100% wrong.
It’s a good way to analyze ideas, opinions and even facts.
It’s the limit of convergence of many philosophies and opinions. And a path to new ideas.
It’s the only idea I know of that could solve the main problem of informal epistemology.
It’s the only idea I know of that can model argumentation.
It explains my direct and most important experience.
MC is the only way to reach any important conclusion I know of.
And I don’t know why it should be wrong. If it’s wrong, we’re probably dead.
Part 10: The Multiverse of Truth
So, I seriously think that properly knowing Motivated Cognition makes a person ~2 times smarter. 2 times more understanding and perceptive. And I mean any person, be it someone outside of the rationalist community or a master of Bayesian reasoning. I believe it’s true even if MC is wrong.
If you can be convinced of it, what can I do to convince you?
If you are already convinced, what can we do with this knowledge?
...
And if I’m right we can go further than x2, to x4. Because there’s a technique that encapsulates and generalizes Motivated Cognition.
What do those numbers mean, x2 and x4? How can I be sure about them? Here’s an analogy: imagine you are a very smart person raised outside of civilization. And one day you discover a computer with all kinds of modern programs. You become “at least 2 times smarter”. And later you discover the Internet with all the knowledge of humanity. You become “at least 2 times smarter” again. It’s not that your IQ doubles; it’s that you discover a whole new world of knowledge which is inevitably going to change your general reasoning. You go from walking speed to riding a bicycle to riding a car. From the 17th century to the 21st century.
Beyond Motivated Cognition
Remember I said that informal analysis of information inevitably requires you to introduce some additional parameter to “truth”? To go further than MC we need to realize: any such additional parameter is still truth itself.
We need to stop thinking about “theories” about the world and start thinking about “statements” about the world.
Each statement is metaphysical, it exists beyond reality and any specific epistemology.
Each statement defines its own epistemology, its own notion of “truth” and exists in its own universe.
Any random statement is a logical fact. You decide if this fact is true or false based on context. Alternatively: you choose the universe where this fact is true or false.
Any two statements are equivalent in a certain epistemology/universe. (Any property of a statement is potentially equivalent to “truth”.) Note that all previous points follow from this one, because properties of statements are statements too.
This is the most natural/rich view on truth, because it’s more general than all other theories. Which are… parts of the truth, ironically. The Multiverse of Truth.
One unusual consequence of this view: “epistemologies” can be viewed as properties of statements. And “Motivated Cognition” can be interpreted as a property of your statements, not a choice you make. And it’s a pretty simple/natural property, so it pops up everywhere. That’s why your opinion can be described by MC even if you really used “logical reasoning” in order to come up with it. That’s why even physical theories can be approximated by MC in worlds where MC doesn’t work. That’s why logic can be emulated by MC.
So, Motivated Cognition is just a tiny part of completely unexplored properties of truth, reasoning methods and ways to enumerate truths. Once you realize it, you can crunch ideas x4 more effectively (compared to normal reasoning). Can we go for x8? I think we absolutely can, if we discover enough new properties of truth. But we all need to start working together.
Shredding ideas
To quickly show you the difference between MC and the generalized method, take a look at this statement: “human ethics are made up, but there are still goals and good/bad distinction if you pursue joy and self-mastery”. Based on Friedrich Nietzsche’s philosophy.
With normal reasoning I would react like this:
I disagree.
With MC I would react like this:
I guess it’s optimistic for radical individualists. But it’s pessimistic in the sense that it destroys a lot of interesting concepts.
If I’m not a radical individualist, those ideas are not very useful to me.
With the generalized method I would react like this:
“There are some truths which follow from the individual’s happiness and self-improvement themselves. This is a logical fact in some epistemology.”
This idea is definitely useful for me even if I’m not a radical individualist and even if I don’t fully buy it. Actually, to deny this idea would be very pessimistic.
Now I see the connection between Buddhism and Nietzsche’s philosophy, even though one is about loss of ego and the other defends egoism. Because Buddhism focuses on the individual themselves.
The generalized method allows you to shred ideas into smaller pieces and extract more true bits much faster. (My post about the generalized method.)
P.S.
Without MC I wouldn’t be literate. Without predisposition to MC there wouldn’t be any thoughts in my head in the first place.
I can’t explain in mathematical terms why MC ends up being useful in this world, but I swear on my life that it is useful. I tried to show it from the first principles as much as I could, even though my first principles are not formal. I believe MC is one of the greatest ideas you can ever learn.
Motivated Cognition and the Multiverse of Truth
Related to: Truth seeking is motivated cognition and Contingency is not arbitrary, Do Sufficiently Advanced Agents Use Logic?
I have a weird belief, it’s both controversial and obscure:
Motivated Cognition (MC) has a serious chance to be a good epistemology.
Even if MC is wrong, it’s very useful for goals which are not about predicting reality. And MC is a path to ideas which don’t require MC, but which are hard to introduce without it.
Properly knowing the idea of MC and knowing how to apply it makes any person ~2 times smarter, more perceptive and understanding.
There is an unexpected duality between Bayesian reasoning and MC. I’m not confident in this part of my belief, but I think this should be true at least to some degree.
I want to have a rational and semi-rational discussion about MC. “Semi-rational” meaning you’re ready to suspend some judgement to fully appreciate the idea. If this idea is true it’s as important as the Scientific method or the Bayes’ theorem, so the stakes are very high. The post contains all cruxes of my belief, so I’m ready for a double-crux. Sorry if I write too arrogantly in a couple of places in the post, I just want to emphasize my point.
I don’t want to “shock” people with my belief. I know, it doesn’t sound nice, but it’s original and doesn’t hurt to entertain. And I realize that it also sounds too ambitious: what’s that part about “2 times smarter”, how can I know that and what does that even mean? I hope my post will be able to clarify this.
I believe that at the very least analyzing MC could help us understand people who criticize and disagree with rationalist community. I think this is by itself is important enough.
Intro: what is Motivated Cognition?
Two types of MC
Let’s differentiate two types of motivated cognition:
Local wishful thinking is about random wishes that don’t take into account facts. “I want to fly, so I jump off of a cliff towards injury and death.”
Global motivated cognition (MC) is about your most important wish + facts.
Logical thinking is about “logical inferences + facts”, MC is about “wishes + facts”.
You can’t use logic without all facts and all inferences. The same way you shouldn’t use MC without all facts and all wishes.
How can MC possibly work?
Note that typical criticisms of MC may be unfair:
Reality doesn’t care about any thinking method.
Wishes can be caused by investigating reality. Also, your “wishing mechanism” is an unexplored part of reality.
Reasons why MC can work without magic:
Maybe it just happens to work in our world, by a weird accident. MC could still count as empiricism and have objective truth. It wouldn’t be too crazy.
Maybe MC is equivalent to logical reasoning. Maybe MC is a complicated consequence of logical rules + basic facts about our world.
Maybe you can “exploit” human psychology, i.e. learn a bunch of true facts about the world by knowing some psychological facts and filtering people’s opinions.
In this post I give informal arguments for why 1 or 2 or 3 may be the case.
Example: a MC-based belief
Here’s an example of a MC-based belief:
Components of the belief:
On one hand, it’s simply a description of what I’m interested in: I’m interested in the core of intelligence which is shared among all people. It’s a bit like a tautology, as if I’m defining general intelligence as something which everyone shares.
On the other hand, it’s truly a belief, something that eventually leads to different expectations. And it can be refuted by reality (e.g. if certain individuals have a lot of “general intelligence” skills absent in other people).
I’m agnostic if my belief can or can’t be expressed in Bayesian epistemology: maybe I just have different priors and different reference classes than people who disagree with me. I don’t really know if my belief is MC-based or not.
I can imagine a possible world where people are equally smart/their intelligence is equally effective. A world which rewards people more often and more diversely. Something like “no free lunch theorem” world, where there’s a lot of puzzles for different types of people. Such world is deeply important to me, so it feels as if just it’s logical possibility should matter.
Note that the same reasoning doesn’t apply to the belief “God exists” for a couple of reasons: (1) it’s a weird belief, because God is supposed to exist “beyond” our world (2) we have too great evidence of absence for God (3) you can argue that existence of God is not a fundamental wish, wishes about people should come first.
Part 1: Motivated Cognition is interesting
This part of the post explains why Motivated Cognition is interesting to study. Even if it’s wrong.
Ethics
Ethics is an example of solid MC.
For example, we don’t kill each other because we don’t want to.
But that doesn’t mean it’s an arbitrary wish and we can just start wanting otherwise.
Politics
Many people link politics to MC.
But people do this in caricature ways, without actually analyzing MC in general.
Human biases
MC is considered to be one of the human biases, obviously.
And yet people don’t actually think through what MC is, how it could work and what it implies.
Legal reasoning
Legal reasoning and a true crime example.
The whole point of making laws and arguing in court is to reverse-engineer paths to conclusions our society wants to reach.
Physics
MC plays a big role in Physics (of all fields) in the shape of pursuing beauty.
The Truth About Beauty in Physics
Religion
MC gets linked to religion, this is obvious too. The link brings up four types of questions: who would actually want to believe in [religion X], is it convenient? what does a believer allowed to want as the best possible thing [e.g. “paradise”]? what can God want? what are reasons to not want God to exist?
I don’t know how much of it was explored, given that people don’t treat MC seriously.
Philosophy
Expected utility (Pascal Wager)
The idea of expected utility can sometimes give justification to MC.
See: Pascal’s wager, Pascal’s mugging, Buddhist wager argument for rebirth and Atheist’s wager.
I’m confused why this topic barely went beyond religion.
Anthropics
Anthropic reasoning is sometimes related to MC in a roundabout and very weak way.
Example 1. If you want to live in a paradise, make sure that you can be born only in a paradise.
Example 2. If parallel worlds exist, then there are versions of you that are infinitely lucky. And versions that are infinitely unlucky (agonizing minds that are barely alive).
Induction
Universal prior may be affected by beings in different universes who pursue their own goals. This is a very strange example of “motivated” cognition.
Rationality
Logical decision theory
In Logical decision theory you decide what logical facts are true based on profit.
It is related to MC, even if it’s not meant to be.
Modest epistemology
You could say that rejecting modest epistemology is an epistemological decision based on MC. You agree to the risk of being wrong in order to use your brain to the fullest.
In general, you can argue that logical reasoning just can’t handle all the (meta-)epistemological problems. Parallel universes, infinities, mugging, modesty… at some point you just have to say “I want to live a certain life. So I’m going to assume I can live it. I’m not going to keep entertaining the possibility that I’m a Boltzman brain or that a random mugger is a Matrix Lord. Even if I have to violate Bayesian updating.”
Wishes are not trivial pt. 1
Rational community knows that desire for immortality or “infinite pleasure” are not trivial topics.
Stories about evil genies and monkey paws teach us that wishes are not trivial.
Ethical paradoxes teach us that wishes are not trivial. Orthogonality Thesis (AI) is about wishes and it contradicts our intuition.
So, at one hand we laugh at MC, but on the other hand we are deeply confused about wishes because they are as complicated as rocket science. In my opinion it’s almost doublethink.
Wishes are not trivial pt. 2
Reality teaches us that wishes are not trivial.
A lot of people die tragically because the bad guys simply fail to want good things for themselves.
There was a time the bad guys were just kids who wanted good things… but at some point the “wishing mechanism” got broken. And now innocent people die because the bad guys pursue some abstractions such as “infinite wealth”, “infinite power” or “infinitely conservative values”. People turn into paperclip maximizers.
I don’t understand how people can laugh at MC while knowing this. Even simply wishing something actually good for yourself is not trivial.
Conclusion
So, if you study ethics, politics, legal systems, physics, religion, human biases or philosophy it would be rational for you to be interested in Motivated Cognition in general.
MC is also related to skills. When you learn to move your body you learn to balance wishes and facts. The same thing when you learn to make and execute plans. When you play chess you often have to play expecting the opponent to make a mistake, i.e. expecting an illogical thing to happen: fake it till you make it.
Part 2: MC and communication
This part is about the way Motivated Cognition could help us in communication. Even if it’s wrong. MC helps me immensely and I just can’t imagine living without it.
MC helps me understand and remember opinions
My goal here is not to accuse people of wishful thinking. I’m just saying I use MC to remember and model opinions of others.
Roger Penrose
Roger Penrose has a pretty complicated theory (Orchestrated objective reduction). It’s about consciousness, but also about purely mathematical topics and about two physical theories (quantum mechanics + general relativity). I would have a hard time remembering the theory. But I know an easy trick to remember it:
Imagine the best possibility (for humans) consistent with today’s physics. Imagine the best (for humans) mathematical facts.
You automatically get Penrose’s theory.
Eliezer Yudkowsky
Eliezer Yudkowsky has a pretty complicated opinion about ethics (Morality as Fixed Computation). It’s about algorithms, but also about probability, math and epistemology. People who know much more about math and algorithms than me struggle to understand Eliezer’s idea. However, I know an easy trick to make everything clear:
Imagine the best (for humans) version of ethics consistent with Eliezer’s simpler beliefs. Related to Eliezer’s interests.
Imagine the best version of ethics toned down by Eliezer’s “pessimism”.
You get something like “ethical statements are metaphysical (algorithms), they have the most interesting epistemological properties of everything (math, probability and counterfactual worlds)”. Now I can recite Eliezer’s opinion even in my sleep.
MC and philosophies
Immanuel Kant
Can you easily remember Immanuel Kant’s philosophy in enough detail? I bet many of you can’t, especially if you disagree with Kant and don’t like philosophy in general. I mean, his philosophy is massive, we have A LOT of ground to cover. However, MC helps me to simplify things:
Our perception is more important than the absolute truth. This is a logical necessity. (That’s interesting and good for us.)
Morality combines choice and obligation. (That’s the most interesting possibility.)
Moral law is unconditional. (That’s convenient.)
Morality = logic. Being a bad guy is simply inconsistent. (That’s convenient.)
There is a mix between a priori and a posteriori knowledge. (That’s the most interesting possibility.)
Moral law comes from not treating other people as tools. (That’s very simple and beautiful.)
That’s it, we covered most of it. Simply being smart and being an optimist gives you a very good approximation of Kant’s philosophy.
Gottfried Leibniz
Check this out: Monadology. Simple substances, fractals, “causality doesn’t exist” and God. Do you understand everything, does everything click together? I may be mistaken, but this is obscure even in the field of philosophy. However, we have MC to simplify everything:
Monism (“everything is a single thing”) is convenient. As is idealism.
Fractals are convenient (because they are the same on all levels).
Causality is not convenient, it’s complicated to keep track of and boring. “Harmony” is a more interesting idea.
God is the sum of all important things. That’s simple and beautiful.
That’s it, we covered everything. Again, being smart and being an optimist (and not being afraid to study esoteric ideas) gives you a good approximation. MC allows you to crunch different philosophies in seconds.
Moreover, using MC you can easily merge Kant’s and Leibniz’s philosophies. Which sadly hasn’t been done and it’s a shame. If it had been done, I probably wouldn’t even need to explain and defend MC.
Classification
So, MC helps to understand, simplify and classify opinions. Not bad.
MC also makes you curios: what is the point of convergence of different opinions? It definitely exists since we can evaluate at least some opinions as less or more optimistic. What opinion do you get when you don’t “tone down” MC by pessimism or specific interests?
But we haven’t finished yet. It gets weirder.
MC “predicts” Science
There’s a comic by SMBC about Quantum Computing:
https://www.smbc-comics.com/comic/the-talk-3
The moral of the story is supposed to be something like “popular sources are misleading, precise math is the best”. (Mom: “For generations physicists had a custom when discussing these matters with outsiders they wanted to avoid being too… graphic. Too explicit. “Gulp” Mathematically precise.”)
However, the son’s mistaken interpretation… simply sounds less interesting than the truth. I wouldn’t want it to be true. If you give true bits and false bits, it’s easy to filter out the false bits using MC. Not necessarily from the first try: you can deal with conflicting information using MC.
So, MC even helps me remember Science. Because misconceptions are boring. And if a misconception is truly interesting… then it’s worth making.
Language cooperation and MC
Gricean maxims tell us that conversation is about cooperation. This fact lets us skip a lot of information in our speech and yet understand each other perfectly well. Because we have an agreement to tell each other relevant information and not hide anything important.
For me MC is a natural extension of Gricean maxims:
If we discuss (X), we should discuss the best and the most interesting version of (X).
If we think that the best version of (X) is impossible, we should at least mention it anyway.
If you don’t even mention the possibility of the best (X), it means you are not properly aware of it.
Living in a world where people are not familiar with MC sometimes does feel as crazy as living in a world without Gricean maxims. To the point that communication feels impossible.
Steelmanning, avoiding weak men and being charitable somewhat compensates the absence of MC, but most of the time people don’t apply those techniques to the fullest extent.
Part 3: MC and research
This part is about the way Motivated Cognition could help us in research. Even if it’s wrong.
The best solution
Let’s say you’re solving a problem. It may be useful to imagine a perfect solution, a perfect outcome of solving the problem. Even if it’s impossible.
However, to imagine the perfect solution to the problem A you may need to be aware of the perfect solutions to the problems B, C, D, E, F, G and H.
And if you never do MC in any way, shape or form, then you may be simply unable to do it.
Pervasive effects
I think that total neglect of Motivated Cognition has a very negative effect on society:
People avoid MC. People forget what “desirability” means.
Having forgotten about “desirability”, people stop desiring to find original theories. People forget what “originality” means.
Without MC people’s empathy towards opinions of each other suffers. People become less aware of each other’s ideas. I mean truly, deeply aware.
The absence of “desirability” and “originality” metrics and “empathy” negatively affects people’s general comprehension of theories and ideas.
So, the pipeline goes like this: “no desirability → no originality and empathy → bad comprehension”.
Originality is really dead at this point: "original ideas" is not a concept that exists in our society. People learn a couple of original ideas in physics, math, philosophy and fiction… and then kind of forget that they can expect to find more original ideas, or that it's even a thing you can look for. Instead people try to estimate "truthfulness" directly and fail horribly.
Logical decision theory is arguably the most original idea in Rationality and one of the most original ideas in philosophy in general, but even the LW community doesn't notice it.
Studying perfection
There’s one rule I think about: “if you don’t assume that X is perfect, you don’t study properties of X, you study something else” or “only perfect things can be studied”.
If you don’t assume that human reasoning is perfect, you don’t study human reasoning, you study math or evolutionary psychology. Something else that you do consider to be “perfect”.
If you don’t assume that humans are perfect, you don’t study humans, you study evolution on the planet Earth.
If you don’t assume that our Universe is perfect, you don’t study our Universe, you study Turing Machines. (e.g. Conway’s Game of Life)
If you can't imagine that Motivated Cognition can be a perfect epistemology, then you don't really get curious about Motivated Cognition.
So, if you really want to study X, your reasoning may seem like MC to other people. See also what I wrote about Gricean maxims. For other people MC is insanity, but for me MC is about basic rules of dealing with information.
Part 4: Epistemologies are broken
You may warm up towards Motivated Cognition if you notice that the intuition for “logical reasoning” may be coming from a completely wrong and confused place. A very big part of my belief in MC is based on my disbelief in what people label as “logical reasoning”.
Folk Epistemology
Imagine for a second that Bayesian Epistemology doesn’t exist. Do you notice something strange in the world?
I do: people don’t have any epistemology (null, zero). And yet every single person is dead sure that some mythical “logical reasoning” exists. Not a single person asks any questions.
People don't understand the difference between formal and informal logic.
People don’t realize the difference between logic and epistemology.
People don’t study argumentation even though they think it’s a key skill. In particular, people don’t study legal reasoning (argumentation on the level of the law).
People don’t acknowledge that the field of “correct reasoning”/informal logic doesn’t exist.
Even in philosophy people barely question logic. Not in a way that would actually affect the field. So, we do have logical pluralism and logical nihilism, but even the article about them is filled with unquestioned argumentation...
Nobody asks “Wait, when did I learn logic? What is it? How can I expect to be any good at it?”
Folk belief in "logical reasoning" is far stranger than any religion. Religious people at least disclose the (supposed) sources of their beliefs and what those beliefs mean to them. I'm deeply confused by the folk belief in logic; I can't come up with even an uncharitable explanation. You need zero self-awareness and zero debate experience in order to not question epistemology.
Is Bayesian Epistemology better?
I think Bayesianism is a brilliant formal epistemology. At least it actually exists.
However, in informal areas of human reasoning Bayesianism has a lot of problems. Rationalists may repeat and even “justify” folk’s mistakes:
Instead of looking at the problems, a bayesian may double down on the fact that "Bayesian axioms are inescapable, they are pure logic". Similar to how folks double down on "logic is logic, it can't be wrong" without noticing that logic isn't even an epistemology.
Just like formal logic, Bayesianism depends on the way you split reality into labels. The reference class problem (see the sketch after this list).
Bayesianism doesn’t model your reasoning and informal argumentation.
A straw bayesian may reason like this: “Yes, I guess I don’t know how informal argumentation works. But who needs it when I have the correct epistemology? And anyway, if somebody forced me with a gun to make bets on arguments, I would make those bets. So, I guess I already know how everything works!”
The Sequences (the foundational text) don't actually analyze argumentation.
Even when bayesians discuss saving the world, they argue like everyone does.
Folks and philosophers often question their knowledge and starting assumptions, but rarely question their inferences. But in informal reasoning the process of inference is more complicated than the rules of formal logic. So, paradoxically, actual argumentation never gets questioned. The most interesting parts of arguments never get questioned. And I'm afraid that rationalists inherit this blind spot. It seems rationalists rarely ask "Wait, can I really apply lessons from Bayes' theorem to informal reasoning?"
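To make the reference class point above concrete, here's a minimal sketch in Python. The scenario and all the numbers are made up purely for illustration; the point is only that the prior you feed into Bayes' theorem depends on which label you attach to the case, and the theorem itself doesn't tell you which label to pick.

```python
# A minimal sketch of the reference class problem (all numbers are invented).
# The same question, "will this 70-year-old patient recover?", gets a
# different base rate depending on which reference class we choose,
# and Bayes' theorem itself doesn't say which class is the right one.

records = [
    {"age": 30, "recovered": True},
    {"age": 35, "recovered": True},
    {"age": 72, "recovered": False},
    {"age": 68, "recovered": True},
    {"age": 75, "recovered": False},
]

def base_rate(rows):
    """Fraction of patients in `rows` who recovered."""
    return sum(r["recovered"] for r in rows) / len(rows)

all_patients = records
elderly_patients = [r for r in records if r["age"] >= 65]

print(base_rate(all_patients))      # 0.6, if the label is "a patient"
print(base_rate(elderly_patients))  # ~0.33, if the label is "an elderly patient"
```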
The main problem of epistemology
The main problem of informal epistemology is splitting reality into labels. Without this step you can't apply formal logic or Bayes' theorem. But there are no rules for attaching labels to real things.
Motivated Cognition is the only idea I know of that has at least a chance to solve the problem of labels. Because pairs like (label, real thing) differ in how convenient they are, which means we can differentiate between such pairs and choose among them.
Part 5: MC has roots in models of argumentation
If you try to model actual human argumentation, you may see how it leads to Motivated Cognition.
I discovered the models below because of MC, not the other way around. However, they retroactively made MC self-evident. Sorry that I don't go into the details of each specific model right now (the post would be too long).
Model 0: arguments are relative
Imagine that arguments don’t have any intrinsic power, even intrinsic meaning. It’s not so hard to imagine. If this is true, you need some meta-thing to give arguments power and meaning.
Motivation can be this meta-thing.
Irony
The most ironic thing may be that “logical reasoning” itself is often best modeled using MC, not logic. Let’s illustrate that and also give an example of an argument “without intrinsic meaning”:
(a quote from “Trust in God, or, The Riddle of Kyon” by Eliezer Yudkowsky)
Maybe the atheist’s argument makes a lot of sense. Maybe it doesn’t make any sense. Why posit a (false) dilemma? It’s probable that the atheist’s argument is best modeled using MC, not logic:
“You should focus on your single most important wish. So, wish to love the very real people near you, not abstract otherworldly entities.”
What do you do when MC emulates logical reasoning better than logic does? It covers the hole in the argument which would otherwise take a lot of debating to cover. (Related: Warrant, and the Tortoise vs. Achilles.) And it would save a lot of time for the theologian, too.
A more radical claim: maybe “logical arguments” simply don’t exist in informal argumentation, only MC-based ones do. Without MC you can’t make sense of any informal debate.
Model 1: choosing concepts
Model 1. Any concept such as “intelligence” has an infinity of versions. In order to reason about the world you need to choose what versions are the most important. You choose based on your previous choices (interests).
This model easily leads to MC.
Model 2: looking for information
Model 2. Instead of theories about the world you can analyze "statements" about the world. A statement can contain a lot of true information even if it's false (e.g. "Napoleon died in 1822" is false, yet it carries a lot of true information about when Napoleon died). A statement's informativity doesn't determine its truth, but it affects it. Informativity depends on context.
Context-dependent truth eventually leads to interest-dependent truth (when you choose which context is more important to you), so this model leads to MC too.
Bold conjecture
Maybe any high-level analysis of information leads to Motivated Cognition.
Because inevitably you need to introduce some additional parameter to “truth”. Convenience, beauty, closeness to your interests, informativity. This additional parameter can be interpreted as MC.
Model 3: Bayesianism inside out
There’s more: you can say that Motivated Cognition is “Bayesianism turned inside out”.
Bayesianism assumes you can describe reality in terms of an infinity of micro atomic events with one parameter (probability).
I think you can describe reality in terms of a single macro fuzzy event with two parameters (probability and complexity). For example, this fuzzy event may sound like "everything is going to be alright" (optimism). You treat this macro event as a function and model all other events by iterating it. Iterations increase complexity and decrease probability. It's like Fourier analysis: modeling an infinity of functions with one basic shape (the sinusoid) at different frequencies and amplitudes.
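I can't make this formal yet, but here's a toy sketch of what I mean. Every name, number and the decay rule are my own illustrative assumptions, not a worked-out model: a single broad "macro event" gets refined step by step, and each refinement raises complexity and lowers probability.

```python
# A toy sketch only: one broad "macro event" is refined step by step.
# The decay factor and all the numbers are arbitrary illustrative choices.

from dataclasses import dataclass

@dataclass
class FuzzyEvent:
    description: str
    probability: float  # how likely the event is
    complexity: int     # how many refinement steps it took to state it

def refine(event: FuzzyEvent, detail: str, decay: float = 0.8) -> FuzzyEvent:
    """Derive a more specific event from a broader one: probability drops, complexity grows."""
    return FuzzyEvent(
        description=f"{event.description}, and {detail}",
        probability=event.probability * decay,
        complexity=event.complexity + 1,
    )

# The single macro event that everything else is modeled from:
optimism = FuzzyEvent("everything is going to be alright", probability=0.9, complexity=0)

step1 = refine(optimism, "this project works out")
step2 = refine(step1, "it works out this year")
print(step2)  # lower probability, higher complexity than the macro event
```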
Logical arguments increase bias
Real-life arguments control how much evidence you need to start questioning something. If you think that arguments have intrinsic meaning, you may unconsciously increase your biases. Take a look at this dialog:
A: If we were consequentialists, we would do horrible things. Turn everyone into orgasming blobs.
B: No, you don’t understand, a more complicated version of consequentialism fixes this.
A: I think there are still some problems...
B: Any concern is, by definition, a consequence, so it gets factored in.
In this situation B needs N times more evidence to start questioning consequentialism compared to A. Or even an infinite amount of evidence. Because B probably treats his choice as “logic” and therefore puts little to no uncertainty in the choice. Yes, you can keep fixing consequentialism, but is it what you should’ve chosen in the first place?
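To show what "N times more evidence" could mean, here's a hedged toy calculation with made-up priors: the closer your prior confidence is to 1, the stronger the evidence (measured as a likelihood ratio) needed to bring you back to genuine doubt, and a prior of exactly 1 can never be moved.

```python
# Toy Bayesian arithmetic with invented priors: how strong must evidence be
# (as a likelihood ratio favoring "the belief is wrong") to drag a belief
# down to 50%?

def likelihood_ratio_to_doubt(prior: float, target: float = 0.5) -> float:
    """Likelihood ratio against the belief needed to move `prior` down to `target`."""
    prior_odds = prior / (1 - prior)
    target_odds = target / (1 - target)
    return prior_odds / target_odds

for prior in (0.7, 0.999):  # A treats the choice as uncertain; B treats it as "logic"
    print(prior, round(likelihood_ratio_to_doubt(prior)))
# 0.7   -> 2:   modest evidence is enough to start questioning
# 0.999 -> 999: needs evidence hundreds of times stronger,
#               and a prior of exactly 1.0 can never be moved at all
```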
Good or bad, MC gives you razor-sharp awareness of such situations. This topic is also one of the main topics in modern politics. (See “social privilege” and “social constructionism”.)
Part 6: MC in basic beliefs
Even when I think about my most trivial opinions, I feel that Motivated Cognition is my core reason for (not) believing something. For example, take a look at this opinion analyzed by Julia Galef:
How to spot a rationalization
I would blame the guy for violating Motivated Cognition: judging someone needlessly. You shouldn’t lose respect for anyone unless you’re forced to (and even then there are ways to not lose 100% of respect).
I can't blame the guy for missing some obscure counterargument. Which doesn't even work unless you want it to work. I believe there are situations when a single argument can tilt you towards an opinion, but I don't think this is such a situation. If the guy answered "Yes", his opinion would still be nonsensical.
Argumentation often feels needlessly ad hoc to me, constraining what we really want to say. The guy's opinion has bad vibes, and we know where those vibes are coming from. Why are we trying to frame it as a miscalculation in some "logic game"?
Believing in Science
Even when I think about scientific theories (evolution, the round Earth, General Relativity, etc.), "I believe because I want to believe" seems like the most natural explanation of my belief. MC encapsulates the knowledge, emotions and attitude associated with the belief.
But why? Because I can’t quantitatively estimate how much I trust in different types of evidence. But I know how much I want to trust Science. And I know how much I need to believe in order to get anywhere. So I can’t fully buy the Bayesian explanation of my trust in Science.
Understanding consensus
MC helps me understand consensus about questions which I can’t fully check myself. For example, there’s a consensus in AI Alignment research that simple solutions to Alignment don’t work. One of the bad solutions is just encoding a bunch of good values in AI.
Motivated cognition helps me to accept that, because MC is easy to align with the consensus:
It’s not a solution I would want to work.
It doesn’t solve the problem I would want to solve.
It’s not the can of worms I want to think about. (Manually encoding values.)
MC also helps me to accept mathematical results, concepts and arguments, such as “actual infinity” and Cantor’s diagonal argument. If I didn’t know about MC, I could sympathize more with people who reject infinity. If you think that “logical reasoning” is all there is, then it’s hard to go and make such metaphysical commitments.
Propaganda
People think propaganda proves that we need to lean on facts and logical reasoning.
I think propaganda proves the exact opposite: logical reasoning is useless for our species unless you force everyone into some “logical totalitarianism” with a single true source of facts and an obligatory “school of correct thought”. If a person is affected by propaganda too much it’s a symptom that the person suffers from a deeper thinking mistake. How can you become hateful by encountering a couple of wrong facts?
I think the absence of MC is the reason why some people feel “forced to become hateful by facts”. When you forget what you “would want to be true”, you forget both what’s true and what you want.
Part 7: Emotional arguments
Here I just give two “emotional” arguments for Motivated Cognition. We’re nearing the end of the post.
Childhood argument
When you first learn the concept of “logical reasoning” as a kid, it’s nonsense. Because you can’t possibly know enough to make sense of the concept.
And you can argue that the concept never starts making sense later.
When you're a kid, you use Motivated Cognition: you balance facts and desires. It makes sense because there's nothing else you can do.
And you can argue that it never stops making sense.
Edge cases argument
In some edge cases I can physically feel how ethics and truth combine into a single concept, somewhat similar to a chicken-or-egg situation… in such cases I think:
Such edge cases can give intuition for Motivated Cognition. Other edge cases (where it feels MC would 100% help) happen when misunderstanding between people is too great:
For example, imagine a person who thinks "humanity should become a mindless orgasming substance, because pleasure is good and a lot of pleasure is even better… you disagree because you haven't tried" and who can't even see the possibility of a problem with their opinion. I feel that Motivated Cognition could help to bridge misunderstanding in cases when nothing else can. "I want people to be something more complicated than orgasming blobs. I want to believe in fragile wishes that should be protected, because it's an interesting possibility. I want to believe that choice exists and it matters."
Part 8: evidence & priors for MC
The last two cruxes of my belief in MC:
The world is ridiculous
Something feels off to me about the world:
Our society devalues human experience (and, as a result, human life). In our society knowledge is power, and a hundred years of suffering is “worth” less than some mathematical theorem.
There are too few original ideas.
Ideas die off too soon. Ideas barely get developed.
Society is too fragmented.
Intelligence is too fragmented. E.g. mathematical knowledge doesn’t give you nearly as much general intelligence as it could have, assuming math is the limit of abstract thinking. And if it’s not the limit, then what is?
Rationality is less effective than it could be. People find success without rationality. A lot of people are not eager to become full-on rationalists.
LW-rationality implicitly assumes that what you see today is all you can get. The only thing which remains is to optimize the hell out of it and run towards the Singularity.
But I think something fundamental is missing. I think we missed something fundamental about human intelligence. Or maybe I know it:
MC and perception
I experience different qualities of a person as parts of a single experience.
And I experience qualities that are usually thought to be universal (e.g. “kindness”) as having different versions for different types of people. Like colors in a spectrum. Or vibes.
Now, what does it have to do with MC being true or false?
MC easily justifies such a model of perception/personality. Because it's the best and the most interesting possibility.
In MC truth is not an infinite list of facts, but something monistic. Which resonates with a monistic model of perception.
In MC you “normalize” everything, analyzing ideas in their own context, not in a universal epistemology. And you get my experiences when you “normalize” your perception.
One model of MC talks about choosing versions of concepts. And in my perception I have “versions of experiences”. I think that perception and argumentation are aspects of the same thought process.
The generalized version of MC leads to an even stronger analogy between perception and argumentation.
My experience of people completely contradicts the way we model intelligence and the way we “treat” people.
I wouldn’t expect to have such experience in a world where MC is false.
And this is my most important experience. This experience “by itself” nudges me towards MC even more, which I can’t explain without sharing the experience.
Note: probably I’m not updating purely on the possibility of my experience. If you want I can try to get into details.
Part 9: The summary
So, when I think about Motivated Cognition, I think that:
It’s important to explore even if it’s 100% wrong.
It’s a good way to analyze ideas, opinions and even facts.
It’s the limit of convergence of many philosophies and opinions. And a path to new ideas.
It’s the only idea I know of that could solve the main problem of informal epistemology.
It’s the only idea I know of that can model argumentation.
It explains my direct and most important experience.
MC is the only way to reach any important conclusion I know of.
And I don’t know why it should be wrong. If it’s wrong, we’re probably dead.
Part 10: The Multiverse of Truth
So, I seriously think that properly knowing Motivated Cognition makes a person ~2 times smarter. 2 times more understanding and perceptive. And I mean any person, be it someone outside of the rationalist community or a master of Bayesian reasoning. I believe it's true even if MC is wrong.
If you can be convinced of it, what can I do to convince you?
If you are already convinced, what can we do with this knowledge?
...
And if I’m right we can go further than x2, to x4. Because there’s a technique that encapsulates and generalizes Motivated Cognition.
What do those numbers mean, x2 and x4? How can I be sure about them? Here's an analogy: imagine you are a very smart person raised outside of civilization. And one day you discover a computer with all kinds of modern programs. You become "at least 2 times smarter". And later you discover the Internet with all the knowledge of humanity. You become "at least 2 times smarter" again. It's not that your IQ doubles, it's that you discover a whole new world of knowledge which is inevitably going to change your general reasoning. You go from walking speed to riding a bicycle to riding a car. From the 17th century to the 21st century.
Beyond Motivated Cognition
Remember I said that informal analysis of information inevitably requires introducing some additional parameter to "truth"? To go further than MC we need to realize: any such additional parameter is still truth itself.
We need to stop thinking about “theories” about the world and start thinking about “statements” about the world.
Each statement is metaphysical, it exists beyond reality and any specific epistemology.
Each statement defines its own epistemology, its own notion of “truth” and exists in its own universe.
Any random statement is a logical fact. You decide if this fact is true or false based on context. Alternatively: you choose the universe where this fact is true or false. (A toy illustration follows after this list.)
Any two statements are equivalent in a certain epistemology/universe. (Any property of a statement is potentially equivalent to “truth”.) Note that all previous points follow from this one, because properties of statements are statements too.
This is the most natural/rich view on truth, because it’s more general than all other theories. Which are… parts of the truth, ironically. The Multiverse of Truth.
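Here's a small toy illustration of the "each statement exists in its own universe" idea. The geometry example is mine, chosen only because it's uncontroversial: the same sentence only gets a truth value once a context interprets it.

```python
# Toy illustration: the same statement is true in one "universe" and false in another.
statement = "the angles of a triangle sum to 180 degrees"

universes = {
    "Euclidean plane": True,
    "surface of a sphere (great-circle triangles)": False,  # angle sums exceed 180 degrees
}

for universe, verdict in universes.items():
    print(f"{statement!r} in the {universe}: {verdict}")
```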
One unusual consequence of this view: “epistemologies” can be viewed as properties of statements. And “Motivated Cognition” can be interpreted as a property of your statements, not a choice you make. And it’s a pretty simple/natural property, so it pops up everywhere. That’s why your opinion can be described by MC even if you really used “logical reasoning” in order to come up with it. That’s why even physical theories can be approximated by MC in worlds where MC doesn’t work. That’s why logic can be emulated by MC.
So, Motivated Cognition is just a tiny part of completely unexplored properties of truth, reasoning methods and ways to enumerate truths. Once you realize it, you can crunch ideas x4 more effectively (compared to normal reasoning). Can we go for x8? I think we absolutely can, if we discover enough new properties of truth. But we all need to start working together.
Shredding ideas
To quickly show you the difference between MC and the generalized method, take a look at this statement: "human ethics are made up, but there are still goals and a good/bad distinction if you pursue joy and self-mastery". It's based on Friedrich Nietzsche's philosophy.
With normal reasoning I would react like this:
I disagree.
With MC I would react like this:
I guess it’s optimistic for radical individualists. But it’s pessimistic in the sense that it destroys a lot of interesting concepts.
If I’m not a radical individualist, those ideas are not very useful to me.
With the generalized method I would react like this:
“There are some truths which follow from the individual’s happiness and self-improvement themselves. This is a logical fact in some epistemology.”
This idea is definitely useful for me even if I'm not a radical individualist and even if I don't fully buy it. Actually, denying this idea would be very pessimistic.
Now I see the connection between Buddhism and Nietzsche’s philosophy, even though one is about loss of ego and the other defends egoism. Because Buddhism focuses on the individual themselves.
The generalized method allows you to shred ideas into smaller pieces and extract more true bits much faster. (My post about the generalized method.)
P.S.
Without MC I wouldn't be literate. Without a predisposition to MC there wouldn't be any thoughts in my head in the first place.
I can't explain in mathematical terms why MC ends up being useful in this world, but I swear on my life that it is useful. I tried to show it from first principles as much as I could, even though my first principles are not formal. I believe MC is one of the greatest ideas you can ever learn.