I think you’re on to something important here. I’ve spent a lot of time researching cognitive biases (it was my job focus for four years, and part of my focus for several more), and I settled on the idea that polarization among viewpoints is the biggest practical impact of cognitive biases. Confirmation bias captures some of it; another part falls under motivated reasoning. But that’s only a partial description. Motivation can be toward many things. In many of the most important cases, I think the motivation is best described as aggression, as you classify it. The motivation is to defeat a person, viewpoint, or idea you dislike and feel enmity toward.
I think it’s important to recognize this in our own cognition. It’s a bias. It produces confirmation bias and motivated reasoning toward believing whatever opposes the person, viewpoint, or idea we don’t like.
But the same habit can be differently motivated. It can also come from wanting to get at the truth, and from a liking for doing that by supposing the opposite of whatever you’re hearing and then looking for arguments and evidence for it. I think we can harness that instinct/habit/bias for better rationality. Directed at our own thinking, it’s a really effective way to combat confirmation bias. And directed at our collaborators, but edited to preserve good social relations, it can be really good for a community’s epistemics.
If we don’t do that careful editing, it brings out more of the same bias in our conversational partners. And we get arguments and conflict instead of discussion.