Sometimes large alliances form by conflating concepts that smaller alliances are already amenable to caring about. E.g., people have different notions of justice, but they agree that something-called-justice is very good, so they can agree to team up and fight for justice. The same may be true for ideas like “safety”, “alignment”, “interpretability”, and other concepts relevant to readers of LessWrong and the Alignment Forum.
When a conflationary alliance exists, there is a continual tension between:
Deconfusing the conflated concept — i.e., learning to specify the different, more nuanced or original versions of the concept, before/without conflation, representing things that nontrivial factions of the alliance might care about (similar to dissolving the question), and
Keeping the alliance together — i.e., maintaining a sense that everyone is on the same team and on the same page about what’s important, so that people trust each other and keep working together.
Reading this sequence will hopefully give you more empathy for other people in cases where you feel like they’re using concepts in a sloppy way. I have three motivations for writing it:
I haven’t found scholarly work detailing the dynamics of conflationary alliances, so I’m writing about them here to provide a written reference for the topic (albeit not a scholarly one), or at least to attract feedback of the form “Why didn’t you just cite [x]?”
I think when we feel frustrated by the epistemic habits of other people and/or cultural movements, we are often encountering a sociocognitive boundary constructed from a conflated concept. Basically, the social group is sustained by a cognitive pattern that prevents the resolution of disagreements about the meaning of the conflated term. Having better empathy for this phenomenon can help us navigate it more gracefully and productively.
I think conflationary alliances are more common than many LessWrong readers realize, even amongst LessWrong readers. So-called big tent politics, pluralism, and overlapping consensus strategies (per John Rawls) all have a tendency to build conflationary alliances (although not necessarily), and I’ll argue later that the LessWrong community itself is somewhat conflationary.
In describing conflationary alliances, I don’t mean to introduce some new kind of deception that people aren’t already practicing. Speaking for myself, I try not to build alliances based on conflation except in cases where I’m genuinely willing to say something acknowledging the conflation, like “Many people here might mean a different thing by X, and I think we should take all of these different meanings seriously”.
Hope this helps!