I think most of these can be viewed as separate types of motivated reasoning. People believe what sounds good to believe. That’s an evaluation of both the logic, if they’ve thought about it, and the value of that belief for that particular person. Belief formation involves a series of decisions, and those decisions are made by reinforcement learning mechanisms in which the dopamine system influences related brain systems.
The definition of motivated reasoning (MR) overlaps with that of confirmation bias, and I think MR is the reason confirmation bias is so strong. Scott Alexander has talked about this a good deal; he says:
Of the fifty-odd biases discovered by Kahneman, Tversky, and their successors, forty-nine are cute quirks, and one is destroying civilization. This last one is confirmation bias—our tendency to interpret evidence as confirming our pre-existing beliefs instead of changing our minds. This is the bias that explains why your political opponents continue to be your political opponents, instead of converting to your obviously superior beliefs. And so on to religion, pseudoscience, and all the other scourges of the intellectual world. (source)
So, to apply the motivated reasoning lens to your categories:
Self-interest: obviously motivated reasoning.
Denial: this one seemed pretty vague, and I’m not sure MR fits it.
Social pressure: the bad consequences of people you like not liking you.
People don’t have images: this wouldn’t be MR, but a different obstacle to thinking about the issue hard enough.
Marginalization of AI doomers: this seems like the most important one. I’d call it disliking AI doomers. You don’t want to believe something you associate with a person or group you emotionally dislike; it just feels bad to do it.

That last one also has the loosest connection to motivated reasoning. You need to wrap in the halo/horns effect. This is what Scott Alexander calls the noncentral fallacy ("the worst argument in the world"), or undefined arguments.
There’s more explanation and exploration to be done, but I haven’t gotten to it yet, so I’m putting my brief thoughts here, since you’re addressing this important topic. I want to write a post called something like “people don’t believe in x-risk because rationality isn’t rational”, exploring this topic. Motivated reasoning makes our thinking locally optimal for our survival (one meaning of rational), at the cost of being logically wrong in some out-of-distribution cases, relative to pure logical induction of the truth (another meaning of rational).
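To make the “locally but not globally rational” idea concrete, here is a toy sketch (purely illustrative; the discount weight and the evidence stream are made up, not a model of any real data). An agent that discounts evidence pointing toward an uncomfortable conclusion is doing something locally rewarding, each discount feels good, but ends up confidently wrong about the thing the evidence actually supports:

```python
import math
import random

def evidence_update(logit, llr):
    """Pure evidence-driven update: just add the log-likelihood ratio."""
    return logit + llr

def motivated_update(logit, llr, comfort_weight=0.7):
    """Toy motivated reasoning: evidence pushing toward the uncomfortable
    conclusion ("x-risk is real") is partially discounted, because
    adopting that belief would feel bad. comfort_weight is made up."""
    if llr > 0:  # positive llr favours the uncomfortable conclusion
        llr *= (1 - comfort_weight)
    return logit + llr

def prob(logit):
    return 1 / (1 + math.exp(-logit))

random.seed(0)
# A noisy evidence stream that, on average, mildly favours the
# uncomfortable hypothesis.
evidence = [random.gauss(0.2, 1.0) for _ in range(200)]

honest, motivated = 0.0, 0.0  # both start at p = 0.5
for llr in evidence:
    honest = evidence_update(honest, llr)
    motivated = motivated_update(motivated, llr)

print(f"evidence-only believer: p(x-risk) = {prob(honest):.2f}")
print(f"motivated reasoner:     p(x-risk) = {prob(motivated):.2f}")
```

With these made-up numbers, the evidence-only believer should end up near certainty while the motivated reasoner ends up near zero, despite both seeing the same evidence stream.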
I agree with your sentiment that most of this is influenced by motivated reasoning.
I would add that “Joep” in the Denial story is motivated by cognitive dissonance, or rather by the attempt to reduce cognitive dissonance by discarding one of the two ideas: “x-risk is real and gives me anxiety” and “I don’t want to feel anxiety”.
In the People Don’t Have Images story, “Dario” is likely influenced by the availability heuristic, where he is attempting to estimate the likelihood of a future event based on how easily he can recall similar past events.
Thanks for your thoughtful answer. It’s interesting how I just describe my observations, and people draw conclusions from them that I hadn’t thought of.