“Don’t make us look bad” is a powerful coordination problem which can have negative effects on a movement. Examples:
Veganism has a reputation for being holier-than-thou. It’s hard to be a vegan without getting lumped in with “those vegans”. So it’s hard to be open about being a vegan, which makes it harder to make veganism more socially acceptable.
Ideas perceived as crazy are associated with the EA movement. For example, EAs seriously discuss the possibility that we are living in a simulation. So do flat earthers. Similarly, outsiders could dismiss EA as too crazy for many other superficial reasons. The NYT’s article on Scott Alexander (https://www.nytimes.com/2021/02/13/technology/slate-star-codex-rationalists.html) sort of acts as an example: juxtaposing “MIRI” and “NRx” implicitly undermines the credibility of AI safety research. EAs trying to work in public policy, for example, might not want to publicly identify as “EA” to the same extent because “the other EAs are making them look bad”.
A person who is part of a movement does something controversial, and it makes the whole movement look bad. For example, longevity has been getting negative press due to the Aubrey de Grey scandal.
The coordination problems the US Democratic Party faces, described by David Shor in this Rationally Speaking podcast episode (http://rationallyspeakingpodcast.org/wp-content/uploads/2020/11/rs248transcript.pdf):
And that’s—coordination’s a very hard thing to do. People have very strong incentives to defect. If you’re an activist going out and saying a very controversial thing, putting it out there in the most controversial, least favorable light so that you get a lot of negative attention. That’s mostly good for you. That’s how you get attention. It helps your career. It’s how you get foundation money. [...]
And we really noticed that all of these campaigns, other than, I guess, Joe Biden, were embracing these really unpopular things. Not just stuff around immigration, but something like half the candidates who ran for president endorsed reparations, which would have been unthinkable, it would have been like a subject of a joke four years ago. And so we were trying to figure out, why did that happen? [...]
But we went and we tested these things. It turns out these unpopular issues were also bad in the primary. The median primary voter is like 58 years old. Probably the modal primary voter is a 58-year-old black woman. And they’re not super interested in a lot of these radical sweeping policies that are out there.
And so the question was, “Why was this happening?” I think the answer was that there was this pipeline of pushing out something that was controversial and getting a ton of attention on Twitter. The people who work at news stations—because old people watch a lot of TV—read Twitter, because the people who run MSNBC are all 28-year-olds. And then that leads to bookings. And so that was the strategy that was going on. And it just shows that there are these incredible incentives to defect.
One takeaway: a moderate Democrat like Joe Biden suffers because crazier-looking Democrats like AOC are “making him look bad”, even if his and AOC’s goals are largely aligned. I can only assume that the Republican Party faces similar issues (not discussed in this podcast episode, though).
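As a toy sketch (my framing, not anything from the podcast): the incentive structure Shor describes looks like a multi-player prisoner’s dilemma, where grabbing attention with an extreme message is individually dominant but the movement’s credibility is a shared resource. The payoff numbers below are made up purely for illustration; only the shape of the incentives is the point.

```python
# Toy "don't make us look bad" game. Payoff numbers are invented;
# the structure is what matters: each activist gains attention by
# choosing an extreme message, but every extreme activist erodes
# the movement's shared credibility.

N_ACTIVISTS = 10
ATTENTION_BONUS = 3    # private payoff for choosing the extreme message
CREDIBILITY_COST = 1   # shared credibility lost per extreme activist


def payoff(i_am_extreme: bool, n_extreme_others: int) -> int:
    """One activist's payoff: private attention minus shared reputational damage."""
    n_extreme_total = n_extreme_others + (1 if i_am_extreme else 0)
    attention = ATTENTION_BONUS if i_am_extreme else 0
    return attention - CREDIBILITY_COST * n_extreme_total


# Whatever the others do, being extreme is individually better ("defect" dominates)...
for n_others in range(N_ACTIVISTS):
    assert payoff(True, n_others) > payoff(False, n_others)

# ...yet the movement as a whole does worse when everyone defects.
all_moderate_total = N_ACTIVISTS * payoff(False, 0)              # 10 * 0 = 0
all_extreme_total = N_ACTIVISTS * payoff(True, N_ACTIVISTS - 1)  # 10 * (3 - 10) = -70
print("total payoff, all moderate:", all_moderate_total)
print("total payoff, all extreme: ", all_extreme_total)
```

In this framing, the questions below are really about how steep that shared credibility cost is in different domains, and how often the all-or-nothing moments arrive.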
Are there more examples of “don’t make us look bad” coordination problems like these? Any examples of overcoming this pressure and succeeding as a movement?
How much do extreme people harm movements? What affects this?
For example, in politics there are a few high-stakes, all-or-nothing elections, where having extreme people quiet down could be beneficial to a particular party. On the other hand, no extreme voices could mean no progress.
In veganism/EA, maybe extreme voices have less of a negative effect because there aren’t as many high-stakes, all-or-nothing opportunities. Instead, a bunch of decentralized actors do stuff. So far EAs seem to be doing fine interfacing with governments (e.g. CSET), so maybe the “don’t make us look bad” factor is weaker here.
This seems interesting and important.