Tell it to someone who doesn’t care
Followup to Marketing rationalism
Target fence-sitters
American culture frames issues as debates between two sides. The inefficacy of debates is amazing. You can attend debates on a subject for years without ever seeing anyone change their mind. I think this is because of who attends debates. People listen to debates because they care about the issue. And they only care about the issue because they’ve already taken a side. Caring then inoculates them against reason.
If the debate really can be approximated by a binary decision, then the people you want to talk to are the fence-sitters. And they aren’t there.
This reminded me of my “wound-healing” theory of international aid. I’ll float a similar idea for social debate: In order to win society over to a view in the long run, you should target the people who don’t care much one way or the other. Politicians already do this.
So how do you get them to listen? They won’t come to your debate, or your conference, or your website. Here are some ways:
Fiction. Think Atlas Shrugged. People will read a novel or watch a movie even if they’re not interested in the issues that it’s about.
The Christian church teaches that evangelism happens through friends. Big tent-revival meetings have been effective now and then, but most conversions happen one person at a time.
Marketing. There’s a much-larger-than-multibillion-dollar industry that does nothing except try to solve the problem of selling things to people who aren’t interested in them. Too bad I don’t know anything about it.
Teaching kids. (We like to say that a child should grow up to the age where they can make their own decisions. But the unbiased child is a myth.) The public consensus is already to teach science rather than religion in school. I’m happy with that consensus and don’t think it needs to be pushed further by teaching rationalist ideology.
(When I combine this theory with the observation that most people don’t change their worldview or their preferences much after the age of maybe 15, I come up with the idea that most cultural change is driven by the random drift of the opinions of children.)
Gravitational debate
But there are many instances of books targeted at people already well on one side of an issue that inspired them to action, or strongly influenced them without flipping them from a 0 to a 1. The God Delusion, for example; or Schrödinger’s What is Life?
So here’s theory number 2: the gravitational model of debate. People adjust their opinions in response to the opinions of the people around them. If a lot of the people around Jack shift their opinions to the right, Jack is likely to shift his opinion to the right. I suspect that Jack is more sensitive to opinions similar to his than to opinions far away. So, like gravity, the strength of the attraction falls off with distance. An opinion sufficiently different from your own is repelling; it invokes an outgroup response rather than an attractive ingroup response. Rush Limbaugh causes some people to shift further left. We could posit a gravitational attraction between opinions that varies from positive at close range to negative at long range.
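To make the model concrete, here is a minimal sketch of one possible force law, assuming linear attraction inside some cutoff distance and a constant push beyond it. The kernel shape and the constants `near` and `rate` are illustrative assumptions, not anything the model pins down:

```python
import numpy as np

def pull(x, y, near=1.0, rate=0.1):
    """Signed pull on opinion x from opinion y: attraction at short
    range (ingroup response), repulsion at long range (outgroup
    response). Kernel shape, cutoff, and rate are arbitrary choices."""
    d = y - x
    if abs(d) < near:
        return rate * d               # drawn toward y
    return -rate * near * np.sign(d)  # pushed away from y

def step(opinions):
    """One round: each person feels the average pull of everyone else."""
    n = len(opinions)
    return [x + sum(pull(x, y) for j, y in enumerate(opinions) if j != i) / n
            for i, x in enumerate(opinions)]
```

Tracking np.median over repeated calls to step would show whether nudging one person cascades into a shift of the median.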
The consequence of this model is that, by shifting anyone’s opinion in one direction, you may trigger a cascade of opinion-shifts that moves the median1 opinion. This says you can write a book targeted anywhere on a spectrum of opinion, and have it affect the entire spectrum indirectly, moving some people from one side of the fence to the other even though they never heard of your book.
One consequence is that, as in tug-of-war voting, it’s rational to try to persuade extremists to be even more extreme than you think is rational, in order to shift the median opinion in your chosen direction. (It might not be the most effective use of your time).
Another consequence is that your book might not influence the masses if the distribution of opinions in opinion-space has large gaps. If, for instance, you write a rah-rah transhumanist book, this might have no effect on the population at large if few people have a partly-positive view of transhumanism—even if the gap in opinion-space isn’t where your targeted audience would be. If the gap is large, your book might move median opinion farther away from your position. The Nazis had a tremendous effect on later 20th century philosophy, and perhaps art—but not in the way they would have liked.
This model works best for emotional issues, or regulatory issues, in which one’s position can be expressed by a real number or vector. In an academic debate, if you have n competing hypotheses, the range of possible positions is discrete; and opinion space probably isn’t a metric space.
Compare and contrast?
These two models make nearly opposite recommendations on how to influence public opinion. The first says to use marketing to target people who don’t care. The second says (approximately) to examine the distribution of opinions, and express an opinion near a large mass of opinions, in the same direction as the vector from the median opinion to your desired opinion.
I think both models have some truth to them. But which accounts for more of our behavior; and when should you use which model?
1 (The median opinion is more relevant than the mean with one-person one-vote. The mean is more relevant with voting systems that let people express the strength of their opinions.)
Google “Overton Window”.
Very interesting. Wikipedia: “Priming the public with fringe ideas intended to be and remain unacceptable, will make the real target ideas seem more acceptable by comparison.” This is a slightly different strategy than the Daily Kos article describes.
What I read doesn’t say this, but I think part of the Republican Overton window strategy is to have lots of loose cannons like Rush Limbaugh who state extreme positions. The Republican party can say “You naughty boy, Rush!” and disclaim everything he says, but still benefit from it.
This also shows the dangers of such a method—if Rush gets too powerful, it goes from “You naughty boy, Rush!” to “You naughty boy, critic of Rush!”, like what’s happening now with respect to Michael Steele. And too much extremism can result in evaporative cooling.
Evaporative cooling?
Evaporative cooling.
Why would the moderates, rather than the extremists, be the high-energy particles?
Because otherwise the metaphor doesn’t work.
They don’t necessarily have to be—but then it should be easy to identify the radicals, as they go off to start their own groups.
If there is another group close to the moderates, say another political party, they may go join that one since that would be more straightforward than waiting for the extremists (who also may have more affective attachment to the label) to leave.
If this is a third theory, a working model consistent with all of them would include short-range attraction (ingroup response), medium-range repulsion (outgroup response), and long-range attraction (moving closer isn’t so bad because you’re so far from the fringe).
Interesting feature of dragging the window: the tactics that move it are aimed outside it. They’re completely unsaleable—but still have a factual impact.
Several thousand years of recorded history indicate to me that rhetoric trumps rationality in every situation when you’re talking about influencing people.
The “best” strategy in terms of actual conversion numbers might be to use rhetoric to get people interested and then taper off as they become more interested. Live forever! Outwit your rivals and get rich! This will become easier if some of us get rich. Preferably all of us.
“People listen to debates because they care about the issue. And they only care about the issue because they’ve already taken a side. Caring then inoculates them against reason.”
This conflicts with my personal experience so badly, I’m not sure what to make of it. When I see pamphlets for debates at my local university campus, the only ones that sound exciting are the ones where both sides of the debate sound reasonable. Also, I have changed my mind about important political issues twice as the result of listening to a debate. Obviously, this is simply anecdotal evidence, but without showing me some kind of data on the subject I will continue to trust my own experience.
Your “wound healing theory” seems to argue that you should focus on very-nearly-converts embedded amongst converts.
I’m not big on your wound-healing theory. Just because it works for platelets doesn’t mean it works for societies. If you want to convert me you’ll have to do more than just give an analogy.
It’s a better metaphor for international aid than it is for debate. I can’t think of empirical tests with existing data. What would you think of a mathematical model that relied on a set of reasonable assumptions? Not that I have one at present.
Actually, I noticed that you did a decent job of defending your theory in that thread. But I dislike applying the idea of wounds to the whole strategy of helping the countries that are only marginally bad first. You could say something like “Aid for a marginally bad country will bring more return on your dollar than aid for a really bad country. And improving marginally bad countries that are near really bad countries will provide them with a model for improvement. It’s a little like how a big wound heals: you start at the edges.” I’m not going to become a proponent without doing research, but it would be more tolerable imho.
I’d be interested to see a mathematical model to support your theory because I have no idea what it would look like.
You can think of the model as people being beads on a string; this allows an easy thought experiment to hypothesize what happens with different strategies.
Building the Model: Suppose that there are two answers to the debate (yes/no). Assign ‘yes’ a positive value and ‘no’ a negative value. A high ‘yes’ value corresponds to strong conviction that ‘yes’ is the answer, and vice versa for ‘no’. Fence-sitters would be right at ‘0’, with no conviction either way. As noted above, the debate position of a person is simply described by a number on the real line. Your model proposes that if a listener has position ‘x’, then their response to position ‘y’ will be to gravitate towards y if |x-y| is small and be repelled away from y if |x-y| is large. (Local attraction but long-distance repulsion.)
If you accept the model above, you can write down equations, but a thought experiment works to some extent: imagine many beads randomly arranged on a line. Generally they pull and repel each other, so suppose they are in equilibrium. (...if possible, see PhilGoetz’s comment.) Then a “debater” placed anywhere along that line will have the effect of clustering nearby beads but scattering far beads away (an effect that increases with distance).
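Here is a quick sketch of that thought experiment, ignoring bead-bead forces and assuming the same sort of toy kernel as above (all positions and constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def pull(x, y, near=1.0, rate=0.05):
    """Toy kernel: attraction toward y inside `near`, repulsion outside.
    The shape and constants are illustrative assumptions."""
    d = y - x
    return rate * d if abs(d) < near else -rate * near * np.sign(d)

beads = rng.uniform(-5, 5, size=200)  # opinions scattered along the line
debater = 2.0                         # a debater fixed at position y = 2

for _ in range(100):
    beads = np.array([x + pull(x, debater) for x in beads])

# Beads that started within `near` of the debater have clustered around
# him; beads that started farther away have been pushed farther out.
print(np.sort(np.round(beads, 2)))
```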
Using this thought experiment, you may determine that some kind of “collect” and “sweep” approach is best.
That sounds right. What does “collect and sweep” mean?
Communication patterns also play a role. Extreme views can be prevented from causing much repelling by presenting them in focused media outlets that people with opposing views are unlikely to hear.
If this is an acceptable model for the second theory, then you can see that the first theory is simultaneously accommodated: a person’s affiliation (yes/no) is defined by their direction relative to zero, so the people on the fence (at 0) require only a nudge to have the desired affiliation. So if you want to increase the number of people who have a certain affiliation, aiming your persuasion at people near 0 is the best way to do that. On the other hand, if you just care about moving the center of mass in the correct direction, it is no less effective to target another position.
The problem is that the people near 0 won’t read your book. The gravitational model suggests that you can write a book on a topic targeted at people near k>0, and affect more people near 0 than by targeting them directly.
What do you mean by “collect and sweep”?
Placing a debater at position y will “collect” people with nearby views. Then the debater should “sweep” them in the direction he wants them to go by moving his arguments in that direction.
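A minimal sketch of that strategy under the same assumed kernel: collect at a fixed position for a while, then drift slowly toward the target so the cluster follows:

```python
import numpy as np

rng = np.random.default_rng(1)

def pull(x, y, near=1.0, rate=0.05):
    """Same toy kernel: attraction inside `near`, repulsion outside."""
    d = y - x
    return rate * d if abs(d) < near else -rate * near * np.sign(d)

beads = rng.uniform(-5, 5, size=200)
debater = 3.0                    # "collect": start among sympathetic views

for t in range(400):
    beads = np.array([x + pull(x, debater) for x in beads])
    if t > 50:                   # "sweep": drift toward the target position
        debater = max(debater - 0.02, -1.0)

print("median opinion:", round(float(np.median(beads)), 2))
```

The sweep rate matters: if the debater moves faster than the collected beads can follow (here, 0.02 per step against a maximum pull of 0.05), he leaves them behind.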
These models aren’t as different as they first appear. Just different formulations of the concept of cognitive volatility.
In either case, you want to exert your effort on the portion of the population that is largest, most likely to change, and whose change would be most beneficial (for some balance among those criteria). The clever point about re-envisioning these fence-sitters with the second model is that they may not be sitting on any fence you recognize, but the fact remains that they are at an unstable equilibrium in the cognitive/social space.
Of course the hard problem is: how do you measure a group’s critical susceptibility to having its mind changed, before it happens?
It might be a stable equilibrium before the introduction of your opinion.
Not if you consider opinions to be perturbations.
Now if you have opinions that are specially geared to be earth-shattering revelations to your target audience, that is an entirely different matter, with its own set of problems.
Interesting.
I don’t really think theory 2 is a deliberate strategy so much as the way things turn out. Richard Dawkins writes a book on atheism because he thinks people need to know about it and society should talk about it more and so on. The only people who buy it are the atheists, because they want to signal their atheism, and the Christians don’t want to read something they disagree with any more than we’d pick up a book on creationism.
It sounds like in your first paragraph about debates, you’re saying a lot of people target their arguments ineffectively. Any theories on why that is, or whether there are specific biases involved?
I think that the main purpose of debates, in the minds of their sponsors, is to entertain rather than to inform. The news network that sets up a debate chooses people who will make colorful accusations against each other. They would be disappointed if the debaters reached agreement halfway through the debate.
A secondary purpose of debates may be for each side to educate and motivate its base.
I think your secondary purpose is actually the primary purpose, excluding sponsors, who, I agree, usually set up the debate for entertainment.
Even if both sides claim that changing minds is the purpose, their actions show otherwise. The “change minds” or “reveal the truth” story is a convenient lie, and one that’s actually believed. Plus, it would be tacky and uncivilized to state the real reason for the debate; best to claim a more noble imperative—and believe it.
Depending on how polarized the sides are, the audience is either mostly, or completely going there to watch a fight and root for their team. Although the audience may respect the other side if they play well, they’re rooting for their side getting in some choice jabs, resulting in a KO. I don’t think, leading up to the event, any side of a debate actually says “this will really change some minds!” No, it’s usually “we’re going to show them why we’re right and they’re wrong,” or some more sophisticated equivalent of “it’s beat-down time.”
I will grant that if one side comes off as incredibly foolish, some may abandon it, but how often does that happen? Betting on a side already makes a person more confident of that choice being the right one.
Dawkins and other aggressive-in-that-way atheists irritate me: they’re being very irrational about either their purpose or their approach. If they want more rational atheists, their chosen methods are very poor. If they just want to have pride in being right, and rallying their base, they shouldn’t keep up a pretense of spreading rational atheism. They want both, but they can’t have it.
I want to make a longer and more focused reply on what I see as the core questions: how can we change minds, and how should we? I’ve been wanting to tackle it here for a while, but I have trouble keeping up with this site.
Like I said, entertainment.
Theory 2 proposes a way that rallying the base spreads atheism.
And I said rational atheism, not atheism.
Granted, I didn’t express my thoughts on that clearly. I think there is a fundamental difference between attempting to get someone to agree with your opinions and helping them develop rationality—likely through both similar and different opinions. I think the latter is a better moral play, and it’s genuine.
What is the higher-priority result for a rational atheist targeting a theist:
1. a more rational theist
2. a not-any-more-rational-than-before atheist
3. an irrational agnostic
I think the biggest “win” of the three is the first. But groups claiming to favor rationality most still get stuck on opinions all the time (cryonics comes to mind). Groups tend to congregate around opinions, and forcing that opinion becomes the group agenda, even though they believe it’s the right and rational opinion to have. It’s hard, because a group that shares few opinions is hardly a group at all, but the sharing of opinions and exclusion of those with different opinions works against an agenda of rationality. And shouldn’t that be the agenda of a rational atheist?
I think the “people who don’t care” of your 1st theory are either 1) unimportant or 2) don’t exist, depending on the meaning of the phrase.
I think theory 2 makes a fatal mistake: it emphasizes the cart (opinion) rather than the horse (rational thinking). I’m willing to grant they’re not so separate and cut-and-dried, but I wanted to illustrate the distinction I see.
It just occurred to me that it might be impossible to construct a gravitational model that has any stable equilibria.
Well, everyone sharing the exact same opinion would be stable.
Embed the particles in a viscous medium. Add particle decay after 70 years and particle creation at random locations. That should give you the possibility of stable distributions of average density over a lower level of continual flux.
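A sketch of that variant, assuming overdamped (“viscous”) dynamics in which position changes in proportion to the net pull, with death at age 70 and rebirth at a random location (the kernel and all constants are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def pull(x, y, near=1.0, rate=0.02):
    """Toy kernel again: attraction inside `near`, repulsion outside."""
    d = y - x
    return rate * d if abs(d) < near else -rate * near * np.sign(d)

positions = rng.uniform(-5, 5, size=100)  # opinions on the real line
ages = rng.integers(0, 70, size=100)      # ages in years

for year in range(200):
    # Viscous medium: velocity proportional to net force, no momentum.
    forces = np.array([sum(pull(x, y) for y in positions) for x in positions])
    positions = positions + forces / len(positions)
    ages = ages + 1
    dead = ages >= 70                                      # particle decay...
    positions[dead] = rng.uniform(-5, 5, size=dead.sum())  # ...and creation
    ages[dead] = 0

# Individual particles churn forever, but the overall density profile
# can settle into a roughly stable shape.
print("std of opinions:", round(float(positions.std()), 2))
```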
Particle creation should probably have a tendency to occur near other particles.
A (slightly) viscous medium makes sense though, maybe.
I’m not sure the model should have stable equilibria. I.e., I’m not sure the way people actually behave is such that a model with stable equilibria would accurately reflect reality. In real life, attitudes change over time, no? So do we really want to select or reject models based on whether or not there are stable equilibria? We should probably expect states that are stable...ish. That is, chunks of the system that are slow to change.