If we can get good enough models of however the scissors-statements actually work, we might be able to help more people be more in touch with the common humanity of both halves of the country, and more able to heal blind spots.
E.g., if the above model is right, maybe we could tell at least some people “try exploring the hypothesis that Y-voters are not so much in favor of Y, as against X—and that you’re right about the problems with Y, but they might be able to see something that you and almost everyone you talk to is systematically blinded to about X.”
We can build a useful genre-savviness about common/destructive meme patterns and how to counter them, maybe. LessWrong is sort of well-positioned to be a leader there: we have analytic strength, and aren’t too politically mindkilled.
I think this idea is worth exploring. The first bit seems pretty easy to convey and get people to listen to:
“try exploring the hypothesis that Y-voters are not so much in favor of Y, as against X—and that you’re right about the problems with Y...
But the second bit
… but they might be able to see something that you and almost everyone you talk to is systematically blinded to about X.”
sounds like a very bitter pill to swallow, and therefore hard to get people to listen to.
I think motivated reasoning effects turn our attention quickly away from ideas we think are “bad” on an emotional level. These might be thought of as low-level ugh fields around those concepts. Steve Byrnes’ excellent work on valence in the brain and mind can be read as an explanation for motivated reasoning and the resulting polarization, and I highly recommend reading it that way. I had reached essentially identical conclusions after some years of studying cognitive biases from the perspective of brain mechanisms, but I haven’t yet gotten around to the substantial task of writing them up well enough to be useful. I think motivated reasoning is by far the most important cognitive bias. Scott Alexander says in his review of The Scout Mindset:
Of the fifty-odd biases discovered by Kahneman, Tversky, and their successors, forty-nine are cute quirks, and one is destroying civilization. This last one is confirmation bias...
I think this is half right; motivated reasoning overlaps heavily with confirmation bias, since most (arguably all) of the things we currently believe are things we think are good to believe. But it’s subtly different, particularly when we think about how to work around it, either in our own minds or when communicating with others.
For instance, deep canvassing appears to sidestep motivated reasoning by focusing on a personal connection, and it appears to actually change minds on political issues (at least acceptance of LGBTQ issues); according to the scant available data, it seems to be the most effective known method for actually changing beliefs. It works on an emotional level, presenting no arguments, just a pleasant conversation with someone from the group in question. It lets people do the work of changing their own minds, as an honest, rational approach should. The specifics of deep canvassing might be limited to opinions about groups, but its success might be a guide to developing other approaches. Not directly asking someone to consider adopting a belief they dislike on an instinctive/unconscious level seems like a sensible starting point.
Applying that to your specific proposal: Perhaps something more like “Y-voters are not so much in favor of Y, as against X … You probably agree that X can be a problem; they’re just estimating it as way worse than you are. Here are some things they’re worried about. Maybe they’re wrong that those things could easily happen, but you can see why they’d want to prevent them.”
This might work for some people, since they don’t actually like the possible consequences, and don’t have strong beliefs about the abstract or complex theories of how those very bad outcomes might come to pass.
That might still set off emotional/valence alarms if it brings up the concept of giving ground to one’s opponents.
Anyway, I think it’s possible to create useful political/cognitive discourse if it’s done carefully and with an understanding of the psychological forces involved. I’d be interested in being involved if some LWers want to workshop ideas along these lines.