[Valence series] 3. Valence & Beliefs
3.1 Post summary / Table of contents
Part of the Valence series.
So far in the series, we defined valence (Post 1) and talked about how it relates to the “normative” world of desires, values, preferences, and so on (Post 2). Now we move on to the “positive” world of beliefs, expectations, concepts, etc. Here, valence is no longer the sine qua non at the center of everything, as it is in the “normative” magisterium. But it still plays a leading role. Actually, two leading roles!
Section 3.2 distinguishes two paths by which valence affects beliefs: first, in its role as a control signal, and second, in its role as “interoceptive” sense data, which I discuss in turn:
Section 3.3 discusses how valence-as-a-control-signal affects beliefs. This is the domain of motivated reasoning, confirmation bias, and related phenomena. I explain how it works both in general and through a nuts-and-bolts toy model. I also elaborate on “voluntary attention” versus “involuntary attention”, in order to explain anxious rumination, which goes against the normal pattern (it involves thinking about something despite a strong motivation not to think about it).
Section 3.4 discusses how valence-as-interoceptive-sense-data affects beliefs. I argue that, if concepts are “clusters in thingspace”, then valence is one of the axes used by this clustering algorithm. I discuss how this relates to various difficulties in modeling and discussing the world separately from how we feel about it, along with the related “affect heuristic” and “halo effect”.
Section 3.5 briefly muses on whether future AI will have motivated reasoning, halo effect, etc., as we humans do. (My answer is “yes, but maybe it doesn’t matter too much”.)
Section 3.6 is a brief conclusion.
3.2 Two paths for normative to bleed into positive
Here’s a diagram from the previous post:
We have two paths by which valence can impact the world-model (a.k.a. “Thought Generator”): the normative path (upward black arrow) that helps control which thoughts get strengthened versus thrown out, and the positive path (curvy green arrow) that treats valence as one of the input signals to be incorporated into the world model. Corresponding to these two paths, we get two ways for valence to impact factual beliefs:
Motivated reasoning / thinking / observing and confirmation bias—related to the upward black arrow, and discussed in §3.3 below;
The entanglement of valence into our conceptual categories, which makes it difficult to think or talk about the world independently from how we feel about it—related to the curvy green arrow, and discussed in §3.4 below.
Let’s proceed with each in turn!
3.3 Motivated reasoning / thinking / observing, including confirmation bias
Of the fifty-odd biases discovered by Kahneman, Tversky, and their successors, forty-nine are cute quirks, and one is destroying civilization. This last one is confirmation bias—our tendency to interpret evidence as confirming our pre-existing beliefs instead of changing our minds.… —Scott Alexander
3.3.1 Attention-control and motor-control provide loopholes through which desires can manipulate beliefs
Wishful thinking—where you believe something because it would be nice if it were true—is generally maladaptive: Imagine spending all day opening your wallet, over and over, expecting each time to find it overflowing with cash. We don’t actually do that, which is an indication that our brains have effective systems to mitigate (albeit not eliminate, as we’ll see) wishful thinking.
How do those mitigations work?
As discussed in Post 1, the brain works by model-based reinforcement learning (RL). Oversimplifying as usual, the “model” (predictive world-model, a.k.a. “Thought Generator”) is trained by self-supervised learning, i.e. predicting immediately-upcoming sensory data and updating on the errors. Meanwhile, the RL part involves using actor-critic RL for decision-making.
This is great! “Self-supervised learning for the predictive world-model” is a fancy way to say:
our beliefs are updated in the direction that increases agreement with our sensory inputs,
…and thus our beliefs are not necessarily updated in the direction that would increase RL reward signals.
Continuing with the previous example, if you look inside your wallet, and it’s not full of cash, then your world-model automatically updates itself to incorporate this fact, even if you wanted that fact to not be true.
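To make that distinction concrete, here is a minimal toy sketch in Python (a made-up illustration, not a model from any literature; all numbers and names are arbitrary). The belief gets nudged by prediction error against what the senses actually report, while the valence of the imagined cash never enters the update at all.

```python
import random

# Toy "world-model": a single learned probability that the wallet contains cash.
# It is trained purely by self-supervised prediction error against observations,
# not by how rewarding the outcome would be.
p_cash = 0.9           # initial (wishful) belief
value_of_cash = 100.0  # how much I *want* the wallet to be full (plays no role below)
lr = 0.2               # learning rate

def observe_wallet():
    # Ground truth: the wallet is essentially never full of cash.
    return 1.0 if random.random() < 0.01 else 0.0

for _ in range(50):
    observation = observe_wallet()           # sensory input
    prediction_error = observation - p_cash  # self-supervised error signal
    p_cash += lr * prediction_error          # belief moves toward the data...
    # ...and note that `value_of_cash` never appears in this update.

print(f"Belief that the wallet is full, after 50 checks: {p_cash:.3f}")
```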
But this system does have a loophole: the decisions you make can impact the input data that goes into self-supervised training of the world-model, by influencing what you’re looking at, what you’re paying attention to, what you read, who you talk to, where you are, and so on. And, as machine learning researchers know well, training data plays a big role in determining a trained model’s behavior.
For example, let’s suppose that, for some reason, I want to strongly and sincerely believe that socialists tend to be violent. Now, I can’t just will myself to strongly and sincerely believe that socialists are violent, in the absence of any evidence whatsoever.[1] But what I can do is:
Use my motor-control levers to search the internet for news stories about violent socialists;
Use my motor- and attention-control levers to read the top 50 stories that show up in that search;
Use my attention-control levers to avoid thinking too hard about the heinous crime against epistemology that I am performing right now;
…And then my world-model will take it from there! Thanks to self-supervised learning from all this input data, my world-model will grow a strong and sincere belief that socialists tend to be violent.
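Here is a toy sketch of that loophole (again, entirely made up; the 2% base rate and the toy “search engine” are hypothetical). The belief-updater below is perfectly honest about whatever data it actually sees, but a motivated search policy controls which data it sees, so the honest update still lands on a wildly skewed belief.

```python
import random

random.seed(0)

# Ground truth: only 2% of (hypothetical) news stories involve violent socialists.
def random_story():
    return {"violent_socialist": random.random() < 0.02}

def search_engine(query, n=50):
    # Motor/attention levers: a targeted search that filters the stream of stories.
    results = []
    while len(results) < n:
        story = random_story()
        if query == "violent socialists" and not story["violent_socialist"]:
            continue  # skip stories that don't match the query
        results.append(story)
    return results

# An "honest" belief updater: it just counts what it actually reads.
def update_belief(stories):
    hits = sum(s["violent_socialist"] for s in stories)
    return hits / len(stories)

belief_unbiased = update_belief([random_story() for _ in range(50)])
belief_motivated = update_belief(search_engine("violent socialists", n=50))

print(f"Belief after reading 50 random stories:   {belief_unbiased:.2f}")
print(f"Belief after reading 50 searched stories: {belief_motivated:.2f}")  # ~1.00
```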
3.3.2 Motivated reasoning and confirmation bias
Let’s assume I currently believe X, and let’s also assume that the idea of “myself doubting X” is demotivating / unappealing (negative valence).
…And by the way, this demotivating-ness is quite common! Maybe doubting X would cause my friends to disrespect me, or maybe I don’t like being confused and/or putting in the effort to rebuild a new understanding of everything related to X, or maybe I think of myself as knowledgeable and would find it unpleasant to see myself-in-hindsight as having been wrong about X, and so on. (More on this in §3.3.6 below.)
Regardless of the reason, this situation tends to lead to motivated reasoning and confirmation bias. We have a lifetime of experience with which to learn the metacognitive pattern “If I read a good argument against X, and if I really try in good faith to make sense of it and chew over it, then there’s a decent chance that I’ll start doubting X”. And the latter part of that thought is demotivating! And since we’re good (too good!) at long-term planning and means-end reasoning, we will find that the idea of reading and deeply engaging with the good argument against X is demotivating from the start, so we probably won’t do it.
That latter demotivating-ness might well be ego-dystonic. Maybe I want to want to deeply engage with good arguments against X. But, ego-dystonic or not, the demotivating-ness is still there, and still contributing to my motor-control and attention-control outputs.
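As a minimal sketch of how that plays out (a made-up toy model, with arbitrary outcomes and valence numbers): a planner that scores each candidate action by the summed valence of its predicted consequences will quietly steer away from “read a good argument against X”, without anyone ever explicitly deciding “don’t engage”.

```python
# Toy planner: score each candidate action by the summed valence of the outcomes
# it predicts, then pick the best-scoring action. "Might start doubting X" carries
# negative valence, so engaging with counterarguments loses out automatically.

predicted_outcomes = {
    "read good argument against X": ["learn something", "might start doubting X"],
    "reread arguments for X":       ["feel knowledgeable"],
    "scroll social media":          ["mild amusement"],
}

valence = {
    "learn something":        +2.0,
    "might start doubting X": -5.0,  # friends' disrespect, confusion, bruised ego, ...
    "feel knowledgeable":     +1.5,
    "mild amusement":         +0.5,
}

def score(action):
    return sum(valence[outcome] for outcome in predicted_outcomes[action])

best = max(predicted_outcomes, key=score)
for action in predicted_outcomes:
    print(f"{action:32s} score = {score(action):+.1f}")
print("Chosen action:", best)  # not the counterargument
```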
Now I’ll jump into a more detailed model of one possible way this phenomenon can unfold:
3.3.3 A nuts-and-bolts model: It generally feels easy and natural to brainstorm / figure out how something might happen, when you want it to happen. Conversely, it generally feels hard and unnatural to figure out how something might happen, when you want it to not happen
Let’s say I want to crack open a coconut and eat the meat inside, but I don’t know how. So I brainstorm.
Brainstorming—now what does that entail, exactly? Well, I’m ultimately trying to find a plan, in my mind’s eye, that looks like: “[fill in the blank] and then the coconut will be open”, and where the plan seems plausible to me (i.e., compatible with my world-model). As soon as I can imagine such a plan, I can then try executing it, with the expectation that it might work.
In the simplest case, my brain will kinda grasp onto the end part of the plan “…and then the coconut is open!”, and keep that piece-of-a-thought active throughout this process. Then my brain will go rummaging around for possible beginning parts of the plan such that the whole plan, altogether, forms a plausible story.
The whole process may take a while. But there’s a subtlety making it work: The thoughts that I have mid-brainstorming are all positive valence, because the brainstorming target (“…and then the coconut will be open!”) is itself positive valence—it’s something that I want to happen. And the brainstorming target is active in my brain throughout this brainstorming process, which by linearity (§2.4.1) adds a heap of positive valence to all the thoughts constituting the brainstorming process.
And if all the thoughts I have during brainstorming are positive valence, then I’m likely to carry on brainstorming, rather than flinching away from the brainstorming process as soon as possible. That follows from the most basic properties of valence—see §1.3.
In other words, the appeal of the brainstorming destination is the fuel powering the whole brainstorming process.
The flip side of this coin is that it feels demotivating to brainstorm how something might happen, when you want it to not happen.
Let’s take red-teaming as an example. Suppose that I’m designing a widget, and I decide to brainstorm reasons why, if we build the widget following this design, it might fail.
Just like the above, the brainstorming process will involve keeping the idea “…and then the widget will fail!” active in my mind, and rummaging around for plausible stories whose ending looks like that.
Unfortunately, “…and then the widget will fail!” is a highly demotivating piece-of-a-thought. (If my widget design has a flaw, then it’s embarrassing, and I have to do a ton of work to fix it!) So again by linearity (§2.4.1), the whole brainstorming process is going to inherit that negative valence. And thus, by the basic properties of valence (§1.3), I’m likely to flinch away from doing this brainstorming process in the first place. Or if I do manage to begin it, I’ll be inclined to quit at the slightest excuse.
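Here is a minimal sketch of that asymmetry, under the linearity assumption of §2.4.1 (the valence of the brainstorming target gets added to every thought that keeps it active) plus the §1.3 rule that negative-valence thoughts get discarded. The numbers are arbitrary; the point is just the qualitative difference between the two cases.

```python
import random

random.seed(1)

def brainstorm(target_valence, max_steps=100):
    """Each brainstorming thought's valence = (its own small valence) + (the
    target's valence, added in via the linearity assumption of §2.4.1). If a
    thought's total valence goes negative, we flinch away and quit (§1.3)."""
    for step in range(1, max_steps + 1):
        thought_valence = random.uniform(-1, 1) + target_valence
        if thought_valence < 0:
            return step   # quit at the first excuse
    return max_steps      # kept brainstorming the whole time

steps_coconut = brainstorm(target_valence=+2.0)  # "...and then the coconut is open!"
steps_redteam = brainstorm(target_valence=-2.0)  # "...and then the widget will fail!"

print(f"Wanted outcome ('coconut open'): kept brainstorming for {steps_coconut} steps")
print(f"Unwanted outcome ('widget fails'): flinched away after {steps_redteam} step(s)")
```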
So most people, most of the time, don’t instinctively red-team their plans. It doesn’t come naturally. People literally offer training courses. The reason it’s called a “red team” is that it’s often a different team of people, who (in their role) want the plan to fail, and thus have valence on their side.
(See also: Positive values seem more robust and lasting than prohibitions; Motivated Stopping and Motivated Continuation.)
3.3.4 Warning: the “motivation” part of “motivated reasoning / thinking / etc.” is not always what it seems
Motivated reasoning and related phenomena are geared towards “something I want”. But it’s important to remember that “something I want” isn’t necessarily what it seems. For example, people sometimes want to fail (even if they don’t want to want to fail). And if so, motivated reasoning / thinking / etc. still applies as usual, but with effects opposite to what you would otherwise expect.
The classic example of “wanting to fail” is when little kids put on a dramatic and unintentionally-hilarious performance of being unable to do something that they can in fact easily do. (The typical context is that an adult has asked them to do the thing by themselves without help, and then the kid is motivated to demonstrate how that request was unreasonable.) If you haven’t spent much time with little kids, here’s a classic example for your viewing pleasure: “spazzing little girl can’t pick up bowl” (YouTube).
You can also find older kids and adults “wanting to fail”, or wanting other things besides success. But adults tend to hide it better, including from themselves. There’s some nice discussion along those lines in the blog post “Have no excuses” by Nate Soares (2015).
3.3.5 An exception: I think anxious / obsessive “brainstorming” is driven by involuntary attention rather than by valence
An interesting case is anxious / obsessive rumination. For example, think of the stereotypical neurotic parent whose kid didn’t come home by curfew. They’re staring out the window and thinking to themselves, “Why aren’t they here? Maybe they got injured! No, wait—maybe they got arrested! No, wait—maybe they got murdered! No, wait…”.
That’s a kind of brainstorming, I suppose.
Thus, according to the discussion in the previous subsection, it seems that, to be consistent, I would need to say that the parent has a subconscious motivation for their child to be in danger. Do I believe that? Maybe in rare cases, e.g. if the parent had been arguing that letting the kid out was a bad idea and now they have a subconscious perverse ego-dystonic desire to be proven right. But I think the more central case is that their true motivation is exactly what you’d think. After all, their brain is probably systematically searching for ways to ensure their child’s safety, not the other way around.
So is this a counterexample to my claim in §3.3.3 above that “the appeal [positive valence] of the brainstorming destination is the fuel powering the whole brainstorming journey”?
Yes! It is a counterexample. Indeed, I think this case works by a different mechanism, involving involuntary attention, not valence.
Let’s pause for background. I was talking about “attention control” above (§3.3.1). That was referring to voluntary attention. Just as motor outputs can be either voluntary (e.g. talking) or involuntary (e.g. spasms, vasoconstriction, crying, etc.), likewise attentional outputs can be either voluntary or involuntary. The difference (by my definition) is that voluntary attention & motor outputs are determined by the brain’s “main” reinforcement learning system (§1.5.6), involving valence, whereas the involuntary outputs are not. Thus:
An attentional output is voluntary when I pay attention to something because I want to pay attention to it. (Note that this “want” can be ego-dystonic—I don’t necessarily want to want to pay attention to it. This is a common source of confusion, because people have a tendency to “externalize” ego-dystonic desires, i.e. to conceptualize them as involuntary “urges”, even if they are voluntary by my definition. Further discussion here.)
An attentional output is involuntary when it arises for any other reason. A simple example is if you’re paying attention to the fact that your arm is extremely itchy. I think the itch signal goes from peripheral nerves to the brainstem, and from there it directly intervenes in thalamocortical attentional pathways. The reward / reinforcement learning / motivation system need not be involved at all, and indeed these thoughts are demotivating.
Think of attention as a constraint on what thoughts can be thunk[2]. When your arm is very itchy, it can be almost impossible to think any thought that isn’t “about” that itch. If one of those thoughts is negative valence, maybe it will get thrown out (per the usual rule, see §1.3), but it will then immediately get replaced by a new different thought which is also about that itch, because the constraint is still there. Indeed, if the constraint is sufficiently constraining, maybe there’s almost a unique constraint-compatible thought, and then the same negative-valence thought can persist—each time it gets tossed away by its negative valence (§1.3), the cortex goes fishing for a new thought, but just dredges up the same thought as before.
I think anxious / obsessive “brainstorming” is like that. Involuntary attention provides a constraint that makes it almost impossible to think any thought that isn’t “about” that interoceptive sensation of anxiety (and its web of presumed causes and associations). So the “brainstorming” can still happen, even if all the thoughts involved in the brainstorming have negative valence.
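Here is a toy sketch of the contrast (topics and valence numbers made up). With no attentional constraint, a negative-valence thought gets tossed and the thought generator is free to wander off to some other topic; with a tight involuntary-attention constraint, every replacement thought lands on the same topic, so the stream of thoughts stays stuck there despite the negative valence.

```python
import random

random.seed(2)

TOPICS = ["kid in danger", "dinner plans", "work email", "tv show"]
VALENCE = {"kid in danger": -3.0, "dinner plans": +0.5,
           "work email": -0.5, "tv show": +1.0}

def next_thought(constraint=None):
    """Toy Thought Generator: proposes a thought, subject to an (involuntary)
    attentional constraint on which topics are available at all."""
    candidates = [constraint] if constraint else TOPICS
    return random.choice(candidates)

def run(constraint, steps=10):
    stream = []
    for _ in range(steps):
        thought = next_thought(constraint)
        # Negative-valence thoughts get tossed out and replaced (§1.3), but every
        # replacement must still satisfy the attentional constraint; under a tight
        # constraint, the "replacement" is the same thought all over again.
        for _ in range(5):
            if VALENCE[thought] >= 0:
                break
            thought = next_thought(constraint)
        stream.append(thought)
    return stream

print("No constraint:      ", run(constraint=None))
print("Anxious constraint: ", run(constraint="kid in danger"))
```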
3.3.6 How does confirmation bias ever not happen?
If it is really so unnatural to brainstorm why I might be wrong, or otherwise take motor and attentional actions that might lead to changing one’s mind, then how does it ever happen that people search for flaws in their plans, change their strong beliefs, and so on?
(Per the previous subsection, anxious rumination can work, and I think really does work for some people, but that’s not an ideal strategy, for many reasons. Isn’t there any other way?)
I think the two most important ingredients in how confirmation bias can be beaten back in practice are:
First, you need to minimize the amount of negative valence on the idea of your plans and ideas and beliefs being wrong: you can “keep your identity small”, and recite the “Litany of Tarski”, and “leave yourself a line of retreat”, and so on.
Second, you need to pile lots of positive valence on certain metacognitive or self-reflective thoughts like “I’m the kind of person who always brainstorms why my plans might fail”, “It is in my long-term best interest to brainstorm why I might be wrong”, “I have Scout Mindset”, and so on. Try imagining all the positive consequences of having those habits; try hanging around people who hold those metacognitive habits in high regard; lean into curiosity and confusion; etc.
The first one reduces the negative valence of processes that may lead to discovering that you’re wrong, and the second one loads positive valence onto those same processes. So with luck we can tip the balance into positive.
In many domains, these kinds of metacognitive habits develop implicitly and organically by reinforcement learning over the course of normal life experience. For example, most people do not suffer from motivated reasoning about whether or not they remembered to pack an umbrella in their bag. Notice that, in this umbrella example, there is a direct and immediate personal harm from having inaccurate beliefs about the world—if I’m wrong, then I’ll get wet. So there’s abundant opportunity for reinforcement learning to work. By contrast, motivated reasoning tends to be much more common and pernicious in domains where inaccurate beliefs have no direct and immediate personal consequences, e.g. politics. If prediction markets take off more in the future, that could help on the margin, but meanwhile, let’s all use and spread advice like the above bullet points, leaning on the helpful fact that “my beliefs are accurate” is an ego-syntonic aspiration for almost everyone.
3.4 Valence impacts beliefs by acting as salient sense data
As discussed in §3.2, valence is both a control signal able to toss out thoughts and plans that seem bad and cement thoughts and plans that seem good, and a piece of sense data that gets incorporated into our predictive (generative) world-model. The previous section (§3.3) was about how valence affects our beliefs through the former role, and now we’re moving onto the latter role.
As background: contrary to what you may have learned as a kid, humans don’t have “five senses”, but rather dozens of senses, including not just sight, smell, hearing, taste, and touch, but also sense of body configuration (“proprioception”), sense of balance / orientation (involving the vestibular system), sense of pain (“nociception”), sense of hunger, sense of our current physiological arousal, and more. I claim that valence is one of those extra sensory inputs, in the category of “interoception”. Computationally, I think that all of these sensory inputs serve as ground truth for self-supervised learning—i.e., our world-models (“Thought Generator”) continually predict imminent sensory inputs, and are updated when the predictions are wrong (§1.5.4).
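To make that computational claim concrete, here is a minimal sketch (a toy model with a made-up environment): the world-model is a linear predictor of the next sensory vector, and valence is just one more channel in that vector, trained by exactly the same prediction-error rule as every other channel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch: the "Thought Generator" as a linear predictor of the next sensory
# vector, where valence is just one more interoceptive channel among the others.
CHANNELS = ["vision", "proprioception", "hunger", "valence"]
n = len(CHANNELS)

# Hypothetical environment dynamics: how the sensory channels (including valence)
# actually evolve. Here, valence happens to depend on vision and hunger.
A_TRUE = np.array([[0.9,  0.0,  0.0,  0.0],
                   [0.1,  0.8,  0.0,  0.0],
                   [0.0,  0.0,  0.95, 0.0],
                   [0.3,  0.0, -0.5,  0.5]])

def step(x):
    return A_TRUE @ x + rng.normal(0, 0.1, size=n)

W = np.zeros((n, n))   # the learned predictive model
lr = 0.1

x = rng.normal(size=n)
for _ in range(20_000):
    x_next = step(x)
    error = x_next - W @ x         # self-supervised prediction error, all channels alike
    W += lr * np.outer(error, x)   # same update rule for valence as for vision etc.
    x = x_next

# The learned row for the valence channel should roughly recover the true dynamics
# [0.3, 0.0, -0.5, 0.5]: valence gets predicted like any other sense.
print("Learned 'valence' row:", np.round(W[CHANNELS.index("valence")], 2))
```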
3.4.1 When your brain clusters similar things into mental categories / concepts, valence is an important ingredient going into that clustering algorithm
Here are two things that I mentioned in §2.4.5:
I mentioned that many concepts explicitly incorporate valence into their definitions—think of words like “preferable”, “problematic”, “trouble”, “roadblock”, “pest”, “flourishing”, and so on.
I offered an example made-up anecdote where Person X generally likes (feels positive valence in association with) the concept “religion”, but dislikes Scientology, and then uncoincidentally Person X is particularly liable to say “Scientology isn’t really a religion.” We could say that Person X is “gerrymandering” the concept “religion” to follow the contours of their own valence assessments. (If you don’t like that example, try thinking of your own example, it’s very easy because everybody does this all the time.)
If you think about it mechanistically, those two things are identical! If Person X dislikes a thing, they probably won’t identify it as an example of “flourishing”; and for the exact same underlying reason, if Person X dislikes a thing, they probably won’t identify it as an example of “religion”. In both cases, from that person’s perspective, the thing just doesn’t seem to belong inside that mental category.
Stepping back: When we form mental categories, we’re finding “clusters in thingspace”—stuff that forms a natural grouping in our mental world. Well, valence is part of our mental world too—a first-class piece of sense data, just like appearance, smell, and so on. So our brains naturally treat valence as an ingredient in the categorization algorithm—indeed, as a very important ingredient.
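As a minimal sketch of what I mean (a toy illustration, not a serious model of concept learning; the feature dimensions and the “scam” category are made up): treat each concept as a nearest-centroid cluster over feature vectors, with valence as one coordinate. The “gerrymandering” example from above falls right out: the two people below agree about every feature of Scientology except the valence coordinate, and they end up filing it under different concepts.

```python
import math

# Concepts as nearest-centroid clusters over feature vectors, where one coordinate
# is valence. "scam" is a hypothetical stand-in for whatever negative-valence
# category is nearby in this person's head.
# features: (has_scripture, has_clergy, valence)
CLUSTER_CENTROIDS = {
    "religion": (1.0, 1.0, +1.0),   # "religion" is a positive-valence cluster here
    "scam":     (0.2, 0.3, -1.0),
}

def nearest_cluster(features):
    return min(CLUSTER_CENTROIDS,
               key=lambda c: math.dist(features, CLUSTER_CENTROIDS[c]))

scientology_per_X = (1.0, 1.0, -1.0)  # Person X dislikes it: valence = -1
scientology_per_Y = (1.0, 1.0, +0.8)  # Person Y likes it:    valence = +0.8

print("Person X files Scientology under:", nearest_cluster(scientology_per_X))  # scam
print("Person Y files Scientology under:", nearest_cluster(scientology_per_Y))  # religion
```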
So if someone uses the word “cult” to mean “an ideologically-aligned tight-knit group that I don’t like”, they haven’t intrinsically done anything wrong or confused—that’s no different from how we all use words like “roadblock” or “contamination”. It only becomes confusing and misleading if that person simultaneously insists that “cult” is a word that describes an aspect of the world independently from how we feel about it. And indeed, people make that move all the time! For example, if you look at a random “cult checklist”, you’ll notice that none of the entries are “…And overall the group is bad, booooo”.
Things become even more confusing in cases like “religion”, which may have positive valence in one person’s mind and negative valence in another’s. Then those two people try to talk to each other about “religion”. There’s a real sense in which they are using the same word, but they are not talking about the same concept: In one person’s head, “religion” is a “cluster in thingspace” characterized by positive valence (among other things), and in the other person’s head, it’s a different “cluster in thingspace”, this time characterized by negative valence (among other things). No wonder it often seems like these two people are talking past each other!
3.4.2 Is the above a bug, or a feature?
As I keep mentioning, it’s useful to model the world independently from how we feel about it. Insofar as that’s true, it’s unfortunate that our brains treat valence as sense data that contributes to conceptual categorization and clustering.
On the other hand, at the end of the day, the main reason our brains build world-models in the first place is to make better decisions. And, as discussed in the previous post, valence is how our brains figure out whether a decision is good or bad. So, if our brains are going to do conceptual categorization and clustering to inform decision-making, what on earth could be more important than using valence as a central ingredient in that clustering algorithm?
So at the end of the day, my guess is that there’s a very good evolutionary reason that the brain treats valence as a ubiquitous and salient piece of sense data: without that design feature, we would struggle to make good decisions and get by in the world.
And then it’s an unfortunate but somewhat-inevitable side-effect of that design feature that “modeling the world independently from how I feel about it” is somewhat unnatural for us humans. Likewise, that design feature saddles us with other annoying things like “meaningless arguments” (§2.4.4) and related arguments-over-definitions, miscommunications, etc. Luckily, it’s possible for us humans to mitigate these problems via learned metacognitive heuristics, memes, the scientific method, and so on. We seem to be getting by, more or less.
To spell out some of the issues involved in more detail:
3.4.3 Three roadblocks to talking and reasoning about the world separately from how we feel about it
3.4.3.1 In real life, many discussions are (at least in part) discussions about valence assessments—even if they appear not to be
Finucane et al. studied people’s attitudes towards different technologies (nuclear power, food preservatives, etc.). Their hypothesis was that people’s view of these technologies was largely determined by a general valence / yay-versus-boo / “affect” assessment in their brain, and people then lean on that general assessment when answering questions that nominally should be factual (not affect-based) in nature.
They reported a few findings. First, when they asked people to rate both the “risks” and the “benefits” of each technology, they found that people tended to judge technologies as low-risk & high-benefit, or high-risk & low-benefit, more often than high-high or low-low—especially when the survey-takers were under time pressure. Second, the authors attempted to manipulate people’s beliefs about either risks or benefits by asking them to read a few-sentence blurb arguing that the risks are high, the risks are low, the benefits are high, or the benefits are low. They found that, when the manipulation pulled benefits-assessments up, it tended to pull risk-assessments down, and so on, by and large.
Regarding the first result, Finucane et al. suggested that, in real life, the risks of a technology should correlate positively, not negatively, with its benefits. For example, if a technology had high risk and low benefit, it wouldn’t be in widespread use in the first place. Since the survey found a negative correlation, they suggest, people must be relying on a heuristic that leads them astray.
But there’s an alternate spin we can put on it. As has been noted (e.g. 1, 2), many apparent cognitive biases are tied up with conversational implicatures (i.e., communicative content that is mutually understood but not explicitly spoken). Now, people just love talking about valence, and arguing about valence (§2.4.4), and thinking about valence. So there’s a widespread conversational implicature that, whatever a conversation seems to be about, if there’s any possible way to interpret it as actually being about valence, then that’s a decent hypothesis for the speaker’s intended meaning.
This implicature really comes out strongly in “meaningless arguments” (as defined in §2.4.4). If I like Israel, and you dislike Israel, and I say to you “Israel is the freest and most democratic country in the Middle East”, then you’re probably going to assume that I’m also implying “…and therefore you should assign a positive valence to Israel”. And in everyday life, you would almost definitely be correct to assume that that’s what I mean!
Anyway, this widespread conversational implicature applies to survey questions too. So when Finucane et al. offered the survey question “In general, how beneficial do you consider the use of food preservatives to be to U.S. society as a whole?”, probably some survey-takers (especially but not exclusively the survey-takers under time pressure) were at least partly inclined to answer whether, all things considered, food preservatives are good or bad. In other words, perhaps they were tempted to interpret the word “beneficial” in the sense of “net beneficial”, or even just “good”.
Insofar as that’s part of the explanation, the so-called “affect heuristic” is not really a heuristic at all—more like a miscommunication!
So more generally, if I ask you a question, you might interpret my question as at least partly a question about your valence assessments. And as we’ve seen, all of the following are possible:
It’s possible that your interpretation of my question is correct, based on my explicit words—for example, if I had asked “Do you like X?”.
It’s possible that your interpretation of my question is correct, but you were relying not on my explicit words but on a (correctly-understood) conversational implicature—for example, if we’re arguing about Israel as in §2.4.4, and I ask “Israel is free and democratic, right?”, then perhaps we both correctly understand that my implied full question is “Israel is free and democratic, and this fact constitutes a reason to feel more generally positive about Israel, right?”
It’s possible that your interpretation of my question is wrong—i.e., I was trying not to ask about your valence assessments. A likely example of this kind of misinterpretation would be the questions in the Finucane et al. survey, as discussed above.
3.4.3.2 Even if we’re trying to communicate and think about the world independently from how we feel about it, valence tends to become involved, via being tied up with our words and concepts
Let’s say I ask you “Is Ahmed talented?” Maybe you and I both understand the current conversational goal is to make a narrow assessment of Ahmed’s objective skill as a ballet dancer, independently from our general positive or negative feelings about Ahmed. But that’s hard! “Talented” is a “cluster in thingspace” characterized (among other things) by having generally positive valence. Thus, other things equal, negative-valence things will be farther from this “talented” cluster than positive-valence things.
This isn’t an inevitable problem—in some situations, we can wind up learning metacognitive heuristics / strategies to notice and correct for this tendency. As far as I know, we can even wind up overcorrecting, and thus give final answers in the opposite direction from what the halo effect would predict! (This is one of many reasons that I find psychology studies on the halo effect difficult to interpret.)
But in any case, it seems pretty clear to me that valence assessments wind up involved in lots of areas that are “positive” rather than “normative”, and thus where, one might think, valence doesn’t belong.
…Or does it belong after all? That brings us to:
3.4.3.3 Treating valence as evidence about how the world works isn’t necessarily a mistake anyway
In the literature on the halo effect (see especially Murphy et al. 1993), I think it’s well-known that the halo effect is not always a mistake. Lots of positive-valence things tend to correlate with each other, and likewise lots of negative-valence things tend to correlate with each other. People who are talented tend to also be conscientious—not perfectly, but more than chance. And people who are murderers tend to also be dishonest. And if a company makes really great mufflers, then it’s a good bet that they also make good tires. Etc.
And this is no accident—our brains systematically make this happen via how they define concepts (§3.4.1) and edit the valence function (§2.5.1). I’ll spell this out with an example. Suppose murderers tend to wear purple shirts. In that world, everyone would start treating “purple-shirt-wearing” as a pejorative. And then later on, we would look out into the world and notice that the halo effect is valid: people with one negative trait (being a murderer) also tend to have a different negative trait (wearing purple shirts). Is that a coincidence? No! We’re just noticing the same correlation that was responsible for the valence assignments in the first place.
Basically, valence is an axis upon which things can sit, and our brain devotes an extraordinary amount of processing power to relentlessly tracking where everything in our mental world sits on that axis. Whatever that axis happens to correspond to, those coordinates can still be useful in the course of answering factual questions (i.e., positive questions as opposed to normative questions).
For example, suppose I ask you “Is a fleep an example of a floop?” You don’t know what those two words mean, and I’m not going to tell you. But I will tell you that my own brain happens to assign positive valence to fleeps, and negative valence to floops. Well, now I have given you real, bona fide evidence that the correct answer to this factual question is “No, a fleep is not an example of a floop”. It’s certainly not strong evidence, but it is more than zero evidence.
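To check that this isn’t just wordplay, here is a little simulation (a made-up setup). Suppose valence and category membership both track some shared underlying “goodness” axis, which is roughly the picture from §3.4.1 and §2.5.1. Then “fleeps are positive valence” is modeled as the “fleep” category skewing toward high-goodness things, “floops are negative valence” as the “floop” category skewing toward low-goodness things, and the conditional probability comes out below the base rate: real but weak evidence.

```python
import random

random.seed(3)

# Each thing has a latent "goodness"; valence tracks goodness with noise, and
# (per §3.4.1 / §2.5.1) so does membership in any given valence-laden category.
def make_thing():
    goodness = random.gauss(0, 1)
    valence = goodness + random.gauss(0, 1)       # valence ~ noisy goodness
    is_fleep = goodness + random.gauss(0, 1) > 0  # "fleep" skews toward good things
    is_floop = goodness + random.gauss(0, 1) < 0  # "floop" skews toward bad things
    return valence, is_fleep, is_floop

things = [make_thing() for _ in range(200_000)]

fleeps = [(f, g) for v, f, g in things if f]
p_floop_given_fleep = sum(g for f, g in fleeps) / len(fleeps)
p_floop_overall = sum(g for _, f, g in things) / len(things)

print(f"P(floop)         = {p_floop_overall:.2f}")     # ~0.50
print(f"P(floop | fleep) = {p_floop_given_fleep:.2f}")  # noticeably below 0.50
```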
3.4.4 The halo effect and affect heuristic
I’ve already basically said everything that I wanted to say about the halo effect and affect heuristic, but I wanted to briefly give them a proper definition and section heading.
First, a note of clarification. I’m using “halo effect” in a broad sense that applies to not only people but also companies, concepts, etc. (But I’ll say much more about people specifically in the next post.) I’m also using “halo effect” in a broad sense that includes both positive and negative judgments (as opposed to using different terms for “halo effect” versus “horn effect”). Given those definitional choices, the term “halo effect” becomes synonymous with “affect heuristic”, as far as I can tell.
With that out of the way, the halo effect / affect heuristic is when we have a generally good or bad impression of a person, group, idea, etc., and use that general impression as a hint when we answer more specific questions about that person, group, idea, etc.
For example, consider the corporation Amazon. Let’s say that Person X generally has a positive impression of that corporation, and that Person Y generally has a negative impression. Now somebody asks them both to guess the probability that Amazon is currently in violation of antitrust regulations. Other things equal, Person X would probably give a lower probability than Person Y. The direction of causation, and exact mechanism, etc. aren’t obvious, but this basic idea seems pretty intuitive to me.
I think the halo effect / affect heuristic comes from some combination of the three phenomena discussed in §3.4.3 above, and I don’t have anything more to say beyond that.
3.5 Will future AI have motivated reasoning, halo effect, etc.?
There’s a school of thought within evolutionary psychology that things like motivated reasoning and the halo effect are particular algorithmic quirks of human brains, installed by Evolution because Evolution doesn’t care about epistemological success; it cares about reproductive success, and sometimes the two come apart. For example (this line of thinking continues), it’s easier to convince someone that your self-serving belief is true if you believe it yourself; therefore evolution built our brains in such a way as to have and preserve self-serving beliefs. This isn’t a crazy idea, and is probably true on the margin, but I hope this post has raised a different possibility: these phenomena are pretty fundamental. Even if you’re coding a future Artificial General Intelligence (AGI) from scratch, my guess is that there’s just no practical way to do it wherein desires won’t impact epistemology, at least to some extent. (Note that “practical” is doing some work here; I think it’s possible to make an AGI wherein desires won’t impact epistemology, but only at the expense of crippling its competence, learning speed, sample efficiency, etc.[3]) See also here.
I sometimes say that if we want future AGIs to have accurate beliefs, we should sign the AGI up for CFAR classes and encourage it to read Scout Mindset and so on, just like the rest of us. That’s partly a joke, but also partly not a joke. The reason it’s partly a joke is that it may be something we wouldn’t need or want to do deliberately. I would expect that a sufficiently smart AI would recognize the usefulness of having accurate beliefs, just as we humans tend to do. And then we wouldn’t need to go out of our way to sign the AGI up for CFAR classes; the AGI would sign itself up, or in the limit of superintelligence, it would reinvent the same ideas from scratch (or better ones). Either way, the AGI would wind up with a giant suite of learned metacognitive habits that patch over all the distortionary side-effects of its cognitive architecture—side-effects like motivated reasoning and the halo effect.
3.6 Conclusion
The previous post discussed how valence is involved in the “normative” magisterium, and this post has discussed how valence is involved in the “positive” magisterium. Taking those together, I think we now have a great foundation for thinking about valence in general. In the next two posts, we will apply that foundation to two particular domains: the social world in the next post, and then mental health in the fifth and final post of the series.
Thanks to Seth Herd, Aysja Johnson, Justis Mills, and Charlie Steiner for critical comments on earlier drafts.
[1] If you disagree with the claim “I can’t just will myself to strongly and sincerely believe that socialists are violent in the absence of any evidence whatsoever”, I bet you’re mentally substituting “actually good evidence” for “evidence”. I agree that people can will themselves to strongly and sincerely believe that socialists are violent on the grounds of very sketchy (so-called) evidence indeed—things like “socialists are bad in other ways” (see §3.4), or “I think I heard someone say that socialists are violent”, etc.
[2] Note for non-native English speakers: “thunk” is a silly / casual past participle of “think”, by analogy with “sunk”, “stunk”, etc. I’m fond of it because e.g. “what thoughts can be thunk” is easier to parse than “what thoughts can be thought”.
[3] Fleshing out this argument, which seems pretty watertight to me: There are two possibilities. Either (A) the AI can apply the full force of its intelligence and motivation to optimize its own thinking, pick out high-information-value information to learn from, etc.; or (B) it can’t. In case (A) (e.g. a brain-like AGI), the AI will have at least some susceptibility to motivated reasoning and related phenomena, per the §3.3 argument. In case (B) (e.g. GPT-4, or AIXItl), I have a hard time believing that such an AI will ever realistically be able to do innovative science etc. at human level or beyond.
[Valence series] 3. Valence & Beliefs
3.1 Post summary / Table of contents
Part of the Valence series.
So far in the series, we defined valence (Post 1) and talked about how it relates to the “normative” world of desires, values, preferences, and so on (Post 2). Now we move on to the “positive” world of beliefs, expectations, concepts, etc. Here, valence is no longer the sine qua non at the center of everything, as it is in the “normative” magisterium. But it still plays a leading role. Actually, two leading roles!
Section 3.2 distinguishes two paths by which valence affects beliefs: first, in its role as a control signal, and second, in its role as “interoceptive” sense data, which I discuss in turn:
Section 3.3 discusses how valence-as-a-control-signal affects beliefs. This is the domain of motivated reasoning, confirmation bias, and related phenomena. I explain how it works both in general and through a nuts-and-bolts toy model. I also elaborate on “voluntary attention” versus “involuntary attention”, in order to explain anxious rumination, which goes against the normal pattern (it involves thinking about something despite a strong motivation not to think about it).
Section 3.4 discusses how valence-as-interoceptive-sense-data affects beliefs. I argue that, if concepts are “clusters in thingspace”, then valence is one of the axes used by this clustering algorithm. I discuss how this relates to various difficulties in modeling and discussing the world separately from how we feel about it, along with the related “affect heuristic” and “halo effect”.
Section 3.5 briefly muses on whether future AI will have motivated reasoning, halo effect, etc., as we humans do. (My answer is “yes, but maybe it doesn’t matter too much”.)
Section 3.6 is a brief conclusion.
3.2 Two paths for normative to bleed into positive
Here’s a diagram from the previous post:
We have two paths by which valence can impact the world-model (a.k.a. “Thought Generator”): the normative path (upward black arrow) that helps control which thoughts get strengthened versus thrown out, and the positive path (curvy green arrow) that treats valence as one of the input signals to be incorporated into the world model. Corresponding to these two paths, we get two ways for valence to impact factual beliefs:
Motivated reasoning / thinking / observing and confirmation bias—related to the upward black arrow, and discussed in §3.3 below;
The entanglement of valence into our conceptual categories, which makes it difficult to think or talk about the world independently from how we feel about it—related to the curvy green arrow, and discussed in §3.4 below.
Let’s proceed with each in turn!
3.3 Motivated reasoning / thinking / observing, including confirmation bias
Of the fifty-odd biases discovered by Kahneman, Tversky, and their successors, forty-nine are cute quirks, and one is destroying civilization. This last one is confirmation bias—our tendency to interpret evidence as confirming our pre-existing beliefs instead of changing our minds.… —Scott Alexander
3.3.1 Attention-control and motor-control provide loopholes through which desires can manipulate beliefs
Wishful thinking—where you believe something because it would be nice if it were true—is generally maladaptive: Imagine spending all day opening your wallet, over and over, expecting each time to find it overflowing with cash. We don’t actually do that, which is an indication that our brains have effective systems to mitigate (albeit not eliminate, as we’ll see) wishful thinking.
How do those mitigations work?
As discussed in Post 1, the brain works by model-based reinforcement learning (RL). Oversimplifying as usual, the “model” (predictive world-model, a.k.a. “Thought Generator”) is trained by self-supervised learning, i.e. predicting immediately-upcoming sensory data and updating on the errors. Meanwhile, the RL part involves using actor-critic RL for decision-making.
This is great! “Self-supervised learning for the predictive world-model” is a fancy way to say:
our beliefs are updated in the direction that increases agreement with our sensory inputs,
…and thus our beliefs are not necessarily updated in the direction that would increase RL reward signals.
Continuing with the previous example, if you look inside your wallet, and it’s not full of cash, then your world-model automatically updates itself to incorporate this fact, even if you wanted that fact to not be true.
But this system does have a loophole: the decisions you make can impact the input data that goes into self-supervised training of the world-model, by influencing what you’re looking at, what you’re paying attention to, what you read, who you talk to, where you are, and so on. And, as machine learning researchers know well, training data plays a big role in determining a trained model’s behavior.
For example, let’s suppose that, for some reason, I want to strongly and sincerely believe that socialists tend to be violent. Now, I can’t just will myself to strongly and sincerely believe that socialists are violent, in the absence of any evidence whatsoever.[1] But what I can do is:
Use my motor-control levers to search the internet for news stories about violent socialists;
Use my motor- and attention-control levers to read the top 50 stories that show up in that search;
Use my attention-control levers to avoid thinking too hard about the heinous crime against epistemology that I am performing right now;
…And then my world-model will take it from there! Thanks to self-supervised learning from all this input data, my world-model will grow a strong and sincere belief that socialists tend to be violent.
3.3.2 Motivated reasoning and confirmation bias
Let’s assume I currently believe X, and let’s also assume that the idea of “myself doubting X” is demotivating / unappealing (negative valence).
…And by the way, this demotivating-ness is quite common! Maybe doubting X would cause my friends to disrespect me, or maybe I don’t like being confused and/or putting in the effort to rebuild a new understanding of everything related to X, or maybe I think of myself as knowledgeable and would find it unpleasant to see myself-in-hindsight as having been wrong about X, and so on. (More on this in §3.3.6 below.)
Regardless of the reason, this situation tends to lead to motivated reasoning and confirmation bias. We have a lifetime of experience with which to learn the metacognitive pattern “If I read a good argument against X, and if I really try in good faith to make sense of it and chew over it, then there’s a decent chance that I’ll start doubting X”. And the latter part of that thought is demotivating! And since we’re good (too good!) at long-term planning and means-end reasoning, we will find that the idea of reading and deeply engaging with the good argument against X is demotivating from the start, so we probably won’t do it.
That latter demotivating-ness might well be ego-dystonic. Maybe I want to want to deeply engage with good arguments against X. But, ego-dystonic or not, the demotivating-ness is still there, and still contributing to my motor-control and attention-control outputs.
Now I’ll jump into a more detailed model of one possible way this phenomenon can unfold:
3.3.3 A nuts-and-bolts model: It generally feels easy and natural to brainstorm / figure out how something might happen, when you want it to happen. Conversely, it generally feels hard and unnatural to figure out how something might happen, when you want it to not happen
Let’s say I want to crack open a coconut and eat the meat inside, but I don’t know how. So I brainstorm.
Brainstorming—now what does that entail, exactly? Well, I’m ultimately trying to find a plan, in my mind’s eye, that looks like: “[fill in the blank] and then the coconut will be open”, and where the plan seems plausible to me (i.e., compatible with my world-model). As soon as I can imagine such a plan, I can then try executing it, with the expectation that it might work.
In the simplest case, my brain will kinda grasp onto the end part of the plan “…and then the coconut is open!”, and keep that piece-of-a-thought active throughout this process. Then my brain will go rummaging around for possible beginning parts of the plan such that the whole plan, altogether, forms a plausible story.
The whole process may take a while. But there’s a subtlety making it work: The thoughts that I have mid-brainstorming are all positive valence, because the brainstorming target (“…and then the coconut will be open!”) is itself positive valence—it’s something that I want to happen. And the brainstorming target is active in my brain throughout this brainstorming process, which by linearity (§2.4.1) adds a heap of positive valence to all the thoughts constituting the brainstorming process.
And if all the thoughts I have during brainstorming are positive valence, then I’m likely to carry on brainstorming, rather than flinching away from the brainstorming process as soon as possible. That follows from the most basic properties of valence—see §1.3.
In other words, the appeal of the brainstorming destination is the fuel powering the whole brainstorming process.
The flip side of this coin is that it feels demotivating to brainstorm how something might happen, when you want it to not happen.
Let’s take red-teaming as an example. Suppose that I’m designing a widget, and I decide to brainstorm reasons why, if we build the widget following this design, it might fail.
Just like the above, the brainstorming process will involve keeping the idea “…and then the widget will fail!” active in my mind, and rummaging around for plausible stories whose ending looks like that.
Unfortunately, “…and then the widget will fail!” is a highly demotivating piece-of-a-thought. (If my widget design has a flaw, then it’s embarrassing, and I have to do a ton of work to fix it!) So again by linearity (§2.4.1), the whole brainstorming process is going to inherit that negative valence. And thus, by the basic properties of valence (§1.3), I’m likely to flinch away from doing this brainstorming process in the first place. Or if I do manage to begin it, I’ll be inclined to quit at the slightest excuse.
So most people, most of the time, don’t instinctively red-team their plans. It doesn’t come naturally. People literally offer training courses. The reason it’s called a “red team” is that it’s often a different team of people, who (in their role) want the plan to fail, and thus have valence on their side.
(See also: Positive values seem more robust and lasting than prohibitions; Motivated Stopping and Motivated Continuation.)
3.3.4 Warning: the “motivation” part of “motivated reasoning / thinking / etc.” is not always what it seems
Motivated reasoning and related phenomena are geared towards “something I want”. But it’s important to remember that “something I want” isn’t necessarily what it seems. For example, people sometimes want to fail (even if they don’t want to want to fail). And if so, motivated reasoning / thinking / etc. still applies as usual, but with the opposite effects that you would otherwise expect.
The classic example of “wanting to fail” is when little kids put on a dramatic and unintentionally-hilarious performance of being unable to do something that they can in fact easily do. (The typical context is that an adult has asked them to do the thing by themselves without help, and then the kid is motivated to demonstrate how that request was unreasonable.) If you haven’t spent much time with little kids, here’s a classic example for your viewing pleasure: “spazzing little girl can’t pick up bowl” (YouTube).
You can also find older kids and adults “wanting to fail”, or wanting other things besides success. But adults tend to hide it better, including from themselves. There’s some nice discussion along those lines in the blog post “Have no excuses” by Nate Soares (2015).
3.3.5 An exception: I think anxious / obsessive “brainstorming” is driven by involuntary attention rather than by valence
An interesting case is anxious / obsessive rumination. For example, think of the stereotypical neurotic parent whose kid didn’t come home by curfew. They’re staring out the window and thinking to themselves, “Why aren’t they here? Maybe they got injured! No, wait—maybe they got arrested! No, wait—maybe they got murdered! No, wait…”.
That’s a kind of brainstorming, I suppose.
Thus, according to the discussion in the previous subsection, it seems that, to be consistent, I would need to say that the parent has a subconscious motivation for their child to be in danger. Do I believe that? Maybe in rare cases, e.g. if the parent had been arguing that letting the kid out was a bad idea and now they have a subconscious perverse ego-dystonic desire to be proven right. But I think the more central case is that their true motivation is exactly what you’d think. After all, their brain is probably systematically searching for ways to ensure their child’s safety, not the other way around.
So is this a counterexample to my claim in §3.3.3 above that “the appeal [positive valence] of the brainstorming destination is the fuel powering the whole brainstorming journey”?
Yes! It is a counterexample. Indeed, I think this case works by a different mechanism, involving involuntary attention, not valence.
Let’s pause for background. I was talking about “attention control” above (§3.3.1). That was referring to voluntary attention. Just as motor outputs can be either voluntary (e.g. talking) or involuntary (e.g. spasms, vasoconstriction, crying, etc.), likewise attentional outputs can be either voluntary or involuntary. The difference (by my definition) is that voluntary attention & motor outputs are determined by the brain’s “main” reinforcement learning system (§1.5.6), involving valence, whereas the involuntary outputs are not. Thus:
An attentional output is voluntary when I pay attention to something because I want to pay attention to it. (Note that this “want” can be ego-dystonic—I don’t necessarily want to want to pay attention to it. This is a common source of confusion, because people have a tendency to “externalize” ego-dystonic desires, i.e. to conceptualize them as involuntary “urges”, even if they are voluntary by my definition. Further discussion here.)
An attentional output is involuntary when it arises for any other reason. A simple example is if you’re paying attention to the fact that your arm is extremely itchy. I think the itch signal goes from peripheral nerves to the brainstem, and from there it directly intervenes in thalamocortical attentional pathways. The reward / reinforcement learning / motivation system need not be involved at all, and indeed these thoughts are demotivating.
Think of attention as a constraint on what thoughts can be thunk[2]. When your arm is very itchy, it can be almost impossible to think any thought that isn’t “about” that itch. If one of those thoughts is negative valence, maybe it will get thrown out (per the usual rule, see §1.3), but it will then immediately get replaced by a new different thought which is also about that itch, because the constraint is still there. Indeed, if the constraint is sufficiently constraining, maybe there’s almost a unique constraint-compatible thought, and then the same negative-valence thought can persist—each time it gets tossed away by its negative valence (§1.3), the cortex goes fishing for a new thought, but just dredges up the same thought as before.
I think anxious / obsessive “brainstorming” is like that. Involuntary attention provides a constraint that makes it almost impossible to think any thought that isn’t “about” that interoceptive sensation of anxiety (and its web of presumed causes and associations). So the “brainstorming” can still happen, even if all the thoughts involved in the brainstorming have negative valence.
3.3.6 How does confirmation bias ever not happen?
If it is really so unnatural to brainstorm why I might be wrong, or otherwise take motor and attentional actions that might lead to changing one’s mind, then how does it ever happen that people search for flaws in their plans, change their strong beliefs, and so on?
(Per the previous subsection, anxious rumination can work, and I think really does work for some people, but that’s not an ideal strategy, for many reasons. Isn’t there any other way?)
I think the two most important ingredients in how confirmation bias can be beaten back in practice are:
First, you need to minimize the amount of negative valence on the idea of your plans and ideas and beliefs being wrong: you can “keep your identity small”, and recite the “Litany of Tarski”, and “leave yourself a line of retreat”, and so on.
Second, you need to pile lots of positive valence on certain metacognitive or self-reflective thoughts like “I’m the kind of person who always brainstorms why my plans might fail”, “It is in my long-term best interest to brainstorm why I might be wrong”, “I have Scout Mindset”, and so on. Try imagining all the positive consequences of having those habits; try hanging around people who hold those metacognitive habits in high regard; lean into curiosity and confusion; etc.
The first one reduces the negative valence of processes that may lead to discovering that you’re wrong, and the second one loads positive valence onto those same processes. So with luck we can tip the balance into positive.
In many domains, these kinds of metacognitive habits develop implicitly and organically by reinforcement learning over the course of normal life experience. For example, most people do not suffer from motivated reasoning about whether or not they remembered to pack an umbrella in their bag. Notice that, in this umbrella example, there is a direct and immediate personal harm from having inaccurate beliefs about the world—if I’m wrong, then I’ll get wet. So there’s abundant opportunity for reinforcement learning to work. By contrast, motivated reasoning tends to be much more common and pernicious in domains where inaccurate beliefs have no direct and immediate personal consequences, e.g. politics. If prediction markets take off more in the future, that could help on the margin, but meanwhile, let’s all use and spread advice like the above bullet points, leaning on the helpful fact that “my beliefs are accurate” is an ego-syntonic aspiration for almost everyone.
3.4 Valence impacts beliefs by acting as salient sense data
As discussed in §3.2, valence is both a control signal able to toss out thoughts and plans that seem bad and cement thoughts and plans that seem good, and a piece of sense data that gets incorporated into our predictive (generative) world-model. The previous section (§3.3) was about how valence affects our beliefs through the former role, and now we’re moving onto the latter role.
As background: contrary to what you may have learned as a kid, humans don’t have “five senses”, but rather dozens of senses, including not just sight, smell, hearing, taste, and touch, but also sense of body configuration (“proprioception”), sense of balance / orientation (involving the vestibular system), sense of pain (“nociception”), sense of hunger, sense of our current physiological arousal, and more. I claim that valence is one of those extra sensory inputs, in the category of “interoception”. Computationally, I think that all of these sensory inputs serve as ground truth for self-supervised learning—i.e., our world-models (“Thought Generator”) continually predict imminent sensory inputs, and are updated when the predictions are wrong (§1.5.4).
3.4.1 When your brain clusters similar things into mental categories / concepts, valence is an important ingredient going into that clustering algorithm
Here are two things that I mentioned in §2.4.5:
I mentioned that many concepts explicitly incorporate valence into their definitions—think of words like “preferable”, “problematic”, “trouble”, “roadblock”, “pest”, “flourishing”, and so on.
I offered a made-up anecdote in which Person X generally likes (feels positive valence in association with) the concept “religion”, but dislikes Scientology, and then, uncoincidentally, Person X is particularly liable to say “Scientology isn’t really a religion.” We could say that Person X is “gerrymandering” the concept “religion” to follow the contours of their own valence assessments. (If you don’t like that example, try thinking of your own; it’s easy, because everybody does this all the time.)
If you think about it mechanistically, those two things are identical! If Person X dislikes a thing, they probably won’t identify it as an example of “flourishing”; and for the exact same underlying reason, if Person X dislikes a thing, they probably won’t identify it as an example of “religion”. In both cases, from that person’s perspective, the thing just doesn’t seem to belong inside that mental category.
Stepping back: When we form mental categories, we’re finding “clusters in thingspace”—stuff that forms a natural grouping in our mental world. Well, valence is part of our mental world too—a first-class piece of sense data, just like appearance, smell, and so on. So our brains naturally treat valence as an ingredient in the categorization algorithm—indeed, as a very important ingredient.
So if someone uses the word “cult” to mean “an ideologically-aligned tight-knit group that I don’t like”, they haven’t intrinsically done anything wrong or confused—that’s no different from how we all use words like “roadblock” or “contamination”. It only becomes confusing and misleading if that person simultaneously insists that “cult” is a word that describes an aspect of the world independently from how we feel about it. And indeed, people make that move all the time! For example, if you look at a random “cult checklist”, you’ll notice that none of the entries are “…And overall the group is bad, booooo”.
Things become even more confusing in cases like “religion”, which may have positive valence in one person’s mind and negative valence in another’s. Then those two people try to talk to each other about “religion”. There’s a real sense in which they are using the same word, but they are not talking about the same concept: In one person’s head, “religion” is a “cluster in thingspace” characterized by positive valence (among other things), and in the other person’s head, it’s a different “cluster in thingspace”, this time characterized by negative valence (among other things). No wonder it often seems like these two people are talking past each other!
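If it helps, here is a tiny toy model of the “gerrymandering” story above (my own illustration, with made-up feature numbers): run the same nearest-cluster categorization for two people who agree about Scientology’s external features but assign it opposite valence, and they carve the “religion” boundary differently.

```python
import numpy as np

# Feature axes: [has rituals, has scripture, valence-in-my-head]
prototypes = {
    "religion":    np.array([1.0, 1.0, +1.0]),   # Person X likes religions...
    "weird group": np.array([1.0, 0.3, -1.0]),   # ...and dislikes "weird groups"
}

def nearest_cluster(item, prototypes):
    """Assign the item to the nearest prototype, a crude stand-in for
    'which cluster in thingspace does this fall into?'"""
    return min(prototypes, key=lambda name: np.linalg.norm(item - prototypes[name]))

scientology_external_features = [1.0, 1.0]   # ritual-ish and scripture-ish

for valence, person in [(+0.8, "someone who likes Scientology"),
                        (-0.8, "Person X, who dislikes Scientology")]:
    item = np.array(scientology_external_features + [valence])
    print(f"{person}: Scientology lands in the '{nearest_cluster(item, prototypes)}' cluster")
```

Same item, same external features, same clustering rule; the only difference is the valence coordinate, and that alone flips which cluster the item lands in.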
3.4.2 Is the above a bug, or a feature?
As I keep mentioning, it’s useful to model the world independently from how we feel about it. Insofar as that’s true, it’s unfortunate that our brains treat valence as sense data that contributes to conceptual categorization and clustering.
On the other hand, at the end of the day, the main reason our brains build world-models in the first place is to make better decisions. And, as discussed in the previous post, valence is how our brains figure out whether a decision is good or bad. So, if our brains are going to do conceptual categorization and clustering to inform decision-making, what on earth could be more important than using valence as a central ingredient in that clustering algorithm?
So at the end of the day, my guess is that there’s a very good evolutionary reason that the brain treats valence as a ubiquitous and salient piece of sense data: without that design feature, we would struggle to make good decisions and get by in the world.
And then it’s an unfortunate but somewhat-inevitable side-effect of that design feature that “modeling the world independently from how I feel about it” is somewhat unnatural for us humans. Likewise, that design feature saddles us with other annoying things like “meaningless arguments” (§2.4.4) and related arguments-over-definitions, miscommunications, etc. Luckily, it’s possible for us humans to mitigate these problems via learned metacognitive heuristics, memes, the scientific method, and so on. We seem to be getting by, more or less.
To spell out some of the issues involved in more detail:
3.4.3 Three roadblocks to talking and reasoning about the world separately from how we feel about it
3.4.3.1 In real life, many discussions are (at least in part) discussions about valence assessments—even if they appear not to be
As an example of people who I claim are confused about this issue, take the classic study “The affect heuristic in judgments of risks and benefits” (Finucane et al., 2000). (Yes, it did replicate, so I’m trusting their results; my nitpick is with their interpretation.)
Finucane et al. studied people’s attitudes towards different technologies (nuclear power, food preservatives, etc.). Their hypothesis was that people’s views of these technologies are largely determined by a general valence / yay-versus-boo / “affect” assessment in their brain, and that people then lean on that general assessment when answering questions that nominally should be factual (not affect-based) in nature.
They reported a few findings. First, when they asked people to rate both the “risks” and the “benefits” of each technology, they found that people tended to judge technologies as low-risk & high-benefit, or high-risk & low-benefit, more often than high-high or low-low—especially when the survey-takers were under time pressure. Second, the authors attempted to manipulate people’s beliefs about either risks or benefits by asking them to read a few-sentence blurb arguing that the risks are high, the risks are low, the benefits are high, or the benefits are low. They found that, by and large, a manipulation that pulled benefits-assessments up tended to pull risk-assessments down, and vice versa.
Regarding the first result, Finucane et al. suggested that, in real life, the risks of a technology should correlate positively, not negatively, with its benefits. For example, if a technology had high risk and low benefit, it wouldn’t be in widespread use in the first place. Since the survey found a negative correlation, they argue, people must be relying on a heuristic that leads them astray.
But there’s an alternate spin we can put on it. As has been noted (e.g. 1, 2), many apparent cognitive biases are tied up with conversational implicatures (i.e., communicative content that is mutually understood but not explicitly spoken). Now, people just love talking about valence, and arguing about valence (§2.4.4), and thinking about valence. So there’s a widespread conversational implicature that, whatever a conversation seems to be about, if there’s any possible way to interpret it as actually being about valence, then that’s a decent hypothesis for the speaker’s intended meaning.
This implicature really comes out strongly in “meaningless arguments” (as defined in §2.4.4). If I like Israel, and you dislike Israel, and I say to you “Israel is the freest and most democratic country in the Middle East”, then you’re probably going to assume that I’m also implying “…and therefore you should assign a positive valence to Israel”. And in everyday life, you would almost definitely be correct to assume that that’s what I mean!
Anyway, this widespread conversational implicature applies to survey questions too. So when Finucane et al. offered the survey question “In general, how beneficial do you consider the use of food preservatives to be to U.S. society as a whole?”, probably some survey-takers (especially but not exclusively the survey-takers under time pressure) were at least partly inclined to answer whether, all things considered, food preservatives are good or bad. In other words, perhaps they were tempted to interpret the word “beneficial” in the sense of “net beneficial”, or even just “good”.
Insofar as that’s part of the explanation, the so-called “affect heuristic” is not really a heuristic at all—more like a miscommunication!
So more generally, if I ask you a question, you might interpret my question as at least partly a question about your valence assessments. And as we’ve seen, all of the following are possible:
It’s possible that your interpretation of my question is correct, based on my explicit words—for example, if I had asked “Do you like X?”.
It’s possible that your interpretation of my question is correct, but you were relying not on my explicit words but on a (correctly-understood) conversational implicature—for example, if we’re arguing about Israel as in §2.4.4, and I ask “Israel is free and democratic, right?”, then perhaps we both correctly understand that my implied full question is “Israel is free and democratic, and this fact constitutes a reason to feel more generally positive about Israel, right?”
It’s possible that your interpretation of my question is wrong—i.e., I was trying not to ask about your valence assessments. A likely example of this kind of misinterpretation would be the questions in the Finucane et al. survey, as discussed above.
3.4.3.2 Even if we’re trying to communicate and think about the world independently from how we feel about it, valence tends to become involved, via being tied up with our words and concepts
Let’s say I ask you “Is Ahmed talented?” Maybe you and I both understand the current conversational goal is to make a narrow assessment of Ahmed’s objective skill as a ballet dancer, independently from our general positive or negative feelings about Ahmed. But that’s hard! “Talented” is a “cluster in thingspace” characterized (among other things) by having generally positive valence. Thus, other things equal, negative-valence things will be farther from this “talented” cluster than positive-valence things.
This isn’t an inevitable problem—in some situations, we can wind up learning metacognitive heuristics / strategies to notice and correct for this tendency. As far as I know, we can even wind up overcorrecting, and thus give final answers opposite to the halo effect! (This is one of many reasons that I find psychology studies on the halo effect to be difficult to interpret.)
But in any case, it seems pretty clear to me that valence assessments wind up involved in lots of areas that are “positive” rather than “normative”, and thus where, one might think, valence doesn’t belong.
…Or does it belong after all? That brings us to:
3.4.3.3 Treating valence as evidence about how the world works isn’t necessarily a mistake anyway
In the literature on the halo effect (see especially Murphy et al. 1993), I think it’s well-known that the halo effect is not always a mistake. Lots of positive-valence things tend to correlate with each other, and likewise lots of negative-valence things tend to correlate with each other. People who are talented tend to also be conscientious—not perfectly, but more than chance. And people who are murderers tend to also be dishonest. And if a company makes really great mufflers, then it’s a good bet that they also make good tires. Etc.
And this is no accident—our brains systematically make this happen via how they define concepts (§3.4.1) and edit the valence function (§2.5.1). I’ll spell this out with an example. Suppose murderers tend to wear purple shirts. In that world, everyone would start treating “purple-shirt-wearing” as a pejorative. And then later on, we would look out into the world and notice that the halo effect is valid: people with one negative trait (being a murderer) also tend to have a different negative trait (wearing purple shirts). Is that a coincidence? No! We’re just noticing the same correlation that was responsible for the valence assignments in the first place.
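Here is a quick simulation of that purple-shirt story (made-up numbers, purely illustrative), showing why the halo/horn inference comes out valid in a world where the valence assignments were driven by a real correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

is_murderer = rng.random(N) < 0.01
# Suppose murderers wear purple shirts 60% of the time, everyone else 10%.
wears_purple = np.where(is_murderer, rng.random(N) < 0.6, rng.random(N) < 0.1)

print(f"P(murderer)                = {is_murderer.mean():.3f}")
print(f"P(murderer | purple shirt) = {is_murderer[wears_purple].mean():.3f}")
# The "horn effect" guess (purple shirt, so somewhat more likely to be a
# murderer) comes out correct here, because the same correlation that made
# purple shirts a pejorative is still out there in the world.
```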
Basically, valence is an axis upon which things can sit, and our brain devotes an extraordinary amount of processing power to relentlessly tracking where everything in our mental world sits on that axis. It doesn’t much matter what that axis corresponds to; the coordinates are still useful when answering factual questions (i.e., positive questions as opposed to normative questions).
For example, suppose I ask you “Is a fleep an example of a floop?” You don’t know what those two words mean, and I’m not going to tell you. But I will tell you that my own brain happens to assign positive valence to fleeps, and negative valence to floops. Well, now I have given you real, bona fide evidence that the correct answer to this factual question is “No, a fleep is not an example of a floop”. It’s certainly not strong evidence, but it is more than zero evidence.
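To put a toy number on “more than zero evidence”: here is a back-of-the-envelope Bayesian sketch, with entirely made-up probabilities, under the assumption that members of a category share the category’s valence sign somewhat more often than chance.

```python
# Hypothetical assumption: when X *is* an example of category Y, X shares my
# valence sign for Y 70% of the time; when X is *not* an example, only 50%
# (i.e., chance).
p_match_if_member     = 0.70
p_match_if_not_member = 0.50

prior_member = 0.5   # agnostic prior on "a fleep is a floop"

# Observation: the valence signs do NOT match (fleep positive, floop negative).
likelihood_member     = 1 - p_match_if_member      # 0.30
likelihood_not_member = 1 - p_match_if_not_member  # 0.50

posterior_member = (likelihood_member * prior_member) / (
    likelihood_member * prior_member + likelihood_not_member * (1 - prior_member))

print(f"P(fleep is a floop | mismatched valence signs) = {posterior_member:.2f}")
# ≈ 0.38, down from the 0.5 prior: real evidence, but weak evidence.
```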
3.4.4 The halo effect and affect heuristic
I’ve already basically said everything that I wanted to say about the halo effect and affect heuristic, but I wanted to briefly give them a proper definition and section heading.
First, a note of clarification. I’m using “halo effect” in a broad sense that applies to not only people but also companies, concepts, etc. (But I’ll say much more about people specifically in the next post.) I’m also using “halo effect” in a broad sense that includes both positive and negative judgments (as opposed to using different terms for “halo effect” versus “horn effect”). Given those definitional choices, the term “halo effect” becomes synonymous with “affect heuristic”, as far as I can tell.
With that out of the way, the halo effect / affect heuristic is when we have a generally good or bad impression of a person, group, idea, etc., and use that general impression as a hint when we answer more specific questions about that person, group, idea, etc.
For example, consider the corporation Amazon. Let’s say that Person X generally has a positive impression of that corporation, and that Person Y generally has a negative impression. Now somebody asks them both to guess the probability that Amazon is currently in violation of antitrust regulations. Other things equal, Person X would probably give a lower probability than Person Y. The direction of causation, and exact mechanism, etc. aren’t obvious, but this basic idea seems pretty intuitive to me.
I think the halo effect / affect heuristic comes from some combination of the three phenomena discussed in §3.4.3 above, and I don’t have anything more to say beyond that.
3.5 Will future AI have motivated reasoning, halo effect, etc.?
There’s a school of thought within evolutionary psychology that things like motivated reasoning and the halo effect are particular algorithmic quirks of human brains, installed by Evolution because Evolution doesn’t care about epistemological success; it cares about reproductive success, and sometimes the two come apart. For example (this line of thinking continues), it’s easier to convince someone that your self-serving belief is true if you believe it yourself; therefore Evolution built our brains in such a way as to have and preserve self-serving beliefs. This isn’t a crazy idea, and is probably true on the margin, but I hope this post has raised a different possibility: these phenomena are pretty fundamental. Even if you’re coding a future Artificial General Intelligence (AGI) from scratch, my guess is that there’s just no practical way to do it wherein desires won’t impact epistemology, at least to some extent. (Note that “practical” is doing some work here; I think it’s possible to make an AGI wherein desires won’t impact epistemology, but only at the expense of crippling its competence, learning speed, sample efficiency, etc.[3]) See also here.
I sometimes say that if we want future AGIs to have accurate beliefs, we should sign the AGI up for CFAR classes and encourage it to read Scout Mindset and so on, just like the rest of us. That’s partly a joke, but also partly not a joke. The reason it’s partly a joke is that it may be something we wouldn’t need or want to do deliberately. I would expect that a sufficiently smart AI would recognize the usefulness of having accurate beliefs, just as we humans tend to do. And then we wouldn’t need to go out of our way to sign the AGI up for CFAR classes; the AGI would sign itself up, or in the limit of superintelligence, it would reinvent the same ideas from scratch (or better ones). Either way, the AGI would wind up with a giant suite of learned metacognitive habits that patch over all the distortionary side-effects of its cognitive architecture—side-effects like motivated reasoning and the halo effect.
3.6 Conclusion
The previous post discussed how valence is involved in the “normative” magisterium, and this post has discussed how valence is involved in the “positive” magisterium. Taking those together, I think we now have a great foundation for thinking about valence in general. In the next two posts, we will apply that foundation to two particular domains: the social world in the next post, and then mental health in the fifth and final post of the series.
Thanks to Seth Herd, Aysja Johnson, Justis Mills, and Charlie Steiner for critical comments on earlier drafts.
If you disagree with the claim “I can’t just will myself to strongly and sincerely believe that socialists are violent in the absence of any evidence whatsoever”, I bet you’re mentally substituting “actually good evidence” for “evidence”. I agree that people can will themselves to strongly and sincerely believe that socialists are violent on the grounds of very sketchy (so-called) evidence indeed—things like “socialists are bad in other ways” (see §3.4), or “I think I heard someone say that socialists are violent”, etc.
Note for non-native English speakers: “thunk” is a silly / casual past participle of “think”, by analogy with “sunk”, “stunk”, etc. I’m fond of it because e.g. “what thoughts can be thunk” is easier to parse than “what thoughts can be thought”.
Fleshing out this argument, which seems pretty watertight to me: There are two possibilities. Either (A) the AI can apply the full force of its intelligence and motivation to optimize its own thinking, pick out high-information-value information to learn from, etc.; or (B) it can’t. In case (A) (e.g. a brain-like AGI), the AI will have at least some susceptibility to motivated reasoning and related phenomena, per the §3.3 argument. In case (B) (e.g. GPT-4, or AIXItl), I have a hard time believing that such an AI will ever realistically be able to do innovative science etc. at human level or beyond.