Ten Modes of Culture War Discourse
Overview
This article is an extended reply to Scott Alexander’s Conflict vs. Mistake.
Whenever the topic has come up in the past, I have always said I lean more towards conflict theory than mistake theory; however, on revisiting the original article, I realize that either I’ve been using those terms in a confusing way, or the usage of the terms has morphed in such a way that confusion is inevitable, or both. My opinion now is that the conflict/mistake dichotomy is overly simplistic because:
One will generally have different kinds of conversations with different people at different times. I may adopt a “mistake” stance when talking with someone who’s already on board with our shared goal X, where we try to figure out how best to achieve X; but then later adopt a “conflict” stance with someone who thinks X is bad. Nobody is a “mistake theorist” or “conflict theorist” simpliciter; the proper object of analysis is conversations, not persons or theories.
It conflates the distinct questions “What am I doing when I approach conversations?” and “What do I think other people are doing when they approach conversations?”, assuming that they must always have the same answer, which is often not the case.
It has trouble accounting for conversations where the meta-level question “What kind of conversation are we having right now?” is itself one of the matters in dispute.
Instead, I suggest a model where there are 10 distinct modes of discourse, which are defined by which of the 16 roles each participant occupies in the conversation. The interplay between these modes, and the extent to which people may falsely believe themselves to occupy a certain role while in fact they occupy another, is (in my view) a more helpful way of understanding the issues raised in the Conflict/Mistake article.
The chart
Explanation of the chart
The bold labels in the chart are discursive roles. The roles are defined entirely by the mode of discourse they participate in (marked with the double lines), so for example there’s no such thing as a “Troll/Wormtongue discourse,” since the role of Troll only exists as part of a Feeder/Troll discourse, and Wormtongue as part of Quokka/Wormtongue. For the same reason, you can’t say that someone “is a Quokka” full stop. (It’s almost inevitable that people will try to interpret the roles in this way, as if they were personality archetypes, so I’ll emphasize again that this is not what a role is—the same person may adopt different roles from one situation to the next.)
The roles are placed into quadrants based on which of the four stances (sincere or insincere friendship or enmity, as defined below) the person playing that role is taking towards their conversation partner.
The double arrows connect confusable roles—someone who is in fact playing one role might mistakenly believe they’re playing the other, and vice-versa. The one-way arrows indicate one-way confusions—the person playing the role at the open end will always believe that they’re playing the role at the pointed end, and never vice-versa. In other words, you will never think of yourself as occupying the role of Mule, Cassandra, Quokka, or Feeder (at least not while it’s happening, although you may later realize it in retrospect).
Constructing the model
This model is not an empirical catalogue of conversations I’ve personally seen out in the wild, but an a priori derivation from a few basic assumptions. While in some regards this is a point in its favor, it’s also its weakness—there are certain modes of discourse that the model “predicts” must exist, but for which I have trouble thinking of any real-world examples, or even imagining hypothetically how such a conversation might go.
Four stances
We will start with the most basic kind of conversation—Alice and Bob are discussing some issue, and there are no other parties. On Alice’s part, we can ask two questions:
Does Alice think that her fundamental values and Bob’s are aligned, or does she think they’re unaligned?
Does Alice say that her fundamental values and Bob’s are aligned, or does she say they’re unaligned?
Answering both questions creates a 2×2 grid with the 4 stances that Alice can adopt:
Sincere Friendship (SF): Alice says that Bob’s values are aligned with her own, and means it.
Insincere Friendship (IF): Alice tells Bob that their values are aligned, but this is not what she actually thinks.
Insincere Enmity (IE): Alice tells Bob that their values are unaligned, but this is not what she actually thinks.
Sincere Enmity (SE): Alice says that Bob’s values are unaligned with her own, and means it.
What do we mean by “sincerity”?
When we ask “Does Alice think...,” we are sweeping a lot of complexity under the rug and effectively treating her mind as a black-box with no internal structure. We are taking a behaviorist/functionalist approach (“X is as X does”) and leaving aside all questions of self-deception, motivated reasoning, elephant/rider relations, etc. So when we consider what Alice says aloud versus what the whole of her “elephant+rider+whatever apparatus” thinks, if the two line up, we say she’s being “sincere,” and if not “insincere”.
This is obviously a gross oversimplification, but I think it’s reasonable here because (a) it’s necessary for keeping the already large number of combinations manageable, and (b) when you’re conversing with someone, you often don’t really care what’s going on inside their head; what you want to know is what kinds of responses to expect from whatever you say to them.
An example to illustrate the point:
At your company, you’ve been working on project X for several months, and your boss comes to you and says, “We’re considering scrapping project X in favor of Y because we think Y serves our company’s needs better. Here are all the reasons why we think this: [presents evidence]. What do you think?” As you consider the evidence, you get a sinking feeling in your gut. Project X is your baby, and if it successfully launches you’ll get a promotion and everyone will heap praise upon you for having the vision to spearhead it. But deep in your heart-of-hearts you see your boss is right—Y really is better. Your rationalization faculty sets to work, picking apart the evidence and coming up with arguments for why X really is better. These arguments are so convincing that, by the time you open your mouth to reply, you really do believe them.
This is still called “insincerity” in the current framework, because the effect from your boss’s perspective is the same as if you were deliberately lying—i.e. your boss should discount the truth-trackingness of your arguments in the same way.
Limitation to two-party discussions
As mentioned, we are only considering two-party discussions. Discourse among three or more parties, such as the following, is not covered:
Two people trying to persuade an audience (real or imagined)
Candidates on a debate stage pandering to their respective bases
Alice pretending to disagree with Bob to distract Carol from the fact that Alice and Bob are secretly allied against her
(I might be able to get into those cases in a follow-up article, but let’s keep it simple for now.)
However, the case of what you might call a “1½-party discussion” (where the speaker aims their message at a particular listener or group of listeners who are not in a position to respond) is similar enough to the two-party case that we can still accommodate it here.
Sixteen roles / ten modes
Now, we can ask the same two questions about Bob to determine which stance he is employing. This gives us a 4×4 grid containing the 16 roles that the two participants may occupy. The respective pairs of stances define the 10 possible modes of discourse. (There are only 10 rather than 16, because 6 of these pairings are just the same as others with Alice and Bob reversed, so we don’t need to consider them separately.)
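To make the counting concrete, here is a minimal sketch (in Python, which is my own choice of illustration rather than anything implied by the chart; the stance abbreviations are the ones defined above). It shows how the 16 ordered stance pairs collapse into 10 modes once we ignore which participant we happen to call Alice.

```python
from itertools import product

# Stance abbreviations as defined above:
# SF = Sincere Friendship, IF = Insincere Friendship,
# IE = Insincere Enmity,   SE = Sincere Enmity.
STANCES = ["SF", "IF", "IE", "SE"]

# Each of Alice's 4 stances paired with each of Bob's 4 stances
# gives 4 x 4 = 16 ordered (Alice, Bob) combinations.
ordered = list(product(STANCES, repeat=2))
assert len(ordered) == 16

# A mode of discourse doesn't care which participant is labelled "Alice":
# (SF, IE) and (IE, SF) are the same mode. Collapsing mirror images
# leaves 4 symmetric + 6 asymmetric = 10 modes.
modes = {tuple(sorted(pair)) for pair in ordered}
assert len(modes) == 10

for mode in sorted(modes):
    print(mode)
```

The four pairs whose two entries match are the symmetric modes described next; the remaining six are the asymmetric ones.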
In the 4 symmetric modes, both Alice and Bob take the same stance towards each other and thus play the same role:
SF/SF (Collegial): Alice and Bob are sincerely working together to try to figure out what will bring about the best outcome for both of them.
IF/IF (Chameleonic): Alice and Bob pretend to be working for the same goal but in fact each one has their own hidden agenda they’re trying to promote.
IE/IE (Chavrusic): Alice and Bob seem to be at each other’s throats but they really are ultimately on the same page, even though they can’t or won’t acknowledge this.
SE/SE (Antagonistic): Alice and Bob are openly hostile to each other, and so the conversation doesn’t even pretend to be about anything other than bashing the other side.
In the 6 asymmetric modes, Alice and Bob take different stances and thus play different roles. (Here, Alice’s stance or role is given first, followed by a slash, then Bob’s):
SF/IF (Quokka/Wormtongue): Alice (the Quokka) is fooled into thinking that Bob’s contributions are honest attempts at truth-finding, when in fact Bob (the Wormtongue) is manipulating her into believing false things when it serves his own interest for her to do so.
SF/SE (Cassandra/Mule): Alice (the Cassandra) is frustrated that Bob can’t see that she’s trying to help both of them, while Bob (the Mule) resolutely ignores anything Alice says because he doesn’t want to be deceived.
SF/IE (Guru/Rebel): Alice (the Guru) is trying to help Bob see that they aren’t really enemies, and Bob (the Rebel) is willing to engage because it seems like there’s something to what she’s saying even though on the surface he thinks she’s wrong.
IF/SE (Siren/Sailor): Alice (the Siren) pretends to be on Bob’s side in order to trick him into doing something that’s actually in her interest and against his own, but Bob (the Sailor) refuses to engage with Alice because he wants to stay focused on opposing her.
SE/IE (Feeder/Troll): Alice (the Feeder) thinks she’s fighting against this terrible person Bob, but Bob (the Troll) doesn’t actually disagree with her, and is just arguing as an intellectual exercise, a form of entertainment, etc.
(I’m somewhat uncertain about this description.)
IF/IE (Yandere/Tsundere): Alice (the Yandere) pretends to like Bob but in fact is trying to manipulate him into doing what she wants, while Bob (the Tsundere) pretends to hate Alice but in fact is totally on-board with her agenda.
This description is a bit of a joke—I can’t even imagine what this mode would look like, let alone think of any real-world examples. This is the place where the model diverges most from reality and common sense. If we still want to salvage the model, my working theory is that the Yandere/Tsundere discourse is “unstable” in the sense that if a conversation enters this mode, even if the participants don’t realize it, the conversation will become so incoherent that they’ll either stop talking, or shift into a different mode.
Explaining confusability
Two roles may be confused for one another when they differ only in their counterpart’s sincerity. In other words, you know (a) whether you’re being sincere or insincere, (b) whether you’re expressing friendship or enmity, and (c) whether your counterpart is expressing friendship or enmity; but you can’t really be sure of (d) whether your counterpart is being sincere or insincere. By toggling this unknown bit, you can see that there are two roles you might be playing, which, from your perspective, seem identical. So, for example, perhaps Alice thinks she’s playing Wormtongue to Bob’s Quokka. But maybe Bob is just as savvy as she is and is also manipulating her in return, which would make it a Chameleonic discourse. So, Alice is never entirely sure whether she’s being a Wormtongue or a Chameleon.
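As an illustrative sketch of the “toggle the unknown bit” argument (again in Python, and again my own construction rather than anything taken from the chart), we can group the 16 ordered stance pairs by the three facts the first participant can actually observe; every group turns out to contain exactly two situations, and those are the confusable pairs.

```python
from itertools import product

# Encode a stance as (sincere?, expresses friendship?):
# (True, True) = SF, (False, True) = IF, (False, False) = IE, (True, False) = SE.
def name(stance):
    sincere, friendly = stance
    return ("S" if sincere else "I") + ("F" if friendly else "E")

# From the inside, I know (a) my own sincerity, (b) my own expression,
# and (c) my counterpart's expression -- but not (d) their sincerity.
def observable(mine, theirs):
    return (mine[0], mine[1], theirs[1])

stances = list(product([True, False], repeat=2))

# Group all 16 ordered (me, them) stance pairs by what I can observe.
groups = {}
for mine, theirs in product(stances, repeat=2):
    groups.setdefault(observable(mine, theirs), []).append(
        (name(mine), name(theirs))
    )

for pair in groups.values():
    print(pair)
# Each printed group has exactly two members, differing only in the hidden
# bit (d). For example [('IF', 'SF'), ('IF', 'IF')]: playing Wormtongue to a
# Quokka looks, from my side, exactly like being one of two Chameleons.
```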
However, four of the confusability relations are one-way only, because in each pair there’s a “chump” role that nobody would intentionally take on:
Quokka: Alice wouldn’t offer sincere friendship to Bob if she knew he was cynically manipulating her. Therefore, she plays the role of Quokka only because she believes the discourse is Collegial.
Cassandra/Mule: If Alice knew she were talking to a brick wall, she would give up; and if Bob knew Alice was trying to help, he would actually listen.
This has the unique property of being a “double-chump” mode, where neither participant would want to continue in their role if they knew what was going on. It can only happen when both parties are simultaneously mistaken—Alice (the Cassandra) thinks she’s the Guru who is trying to get through to the Rebel Bob, while Bob (the Mule) thinks he’s the Sailor who is heroically resisting the manipulation of the Siren Alice.
Feeder: Alice wouldn’t be attacking Bob so bitterly if she knew he didn’t actually care about what he was saying. Therefore, she plays the role of Feeder only because she believes the discourse is Antagonistic.
But granted, there’s something strangely arbitrary about these one-wayness arguments. Sure, the placement of the one-way arrows makes a nice symmetry on the chart, but is there some underlying principle behind which confusions are one-way and which are two-way? Is it possible that some people will reject the above arguments and insist “No, I’m going to be (Quokka, Cassandra, Mule, Feeder) intentionally”? Why can’t we come up with similar arguments for the other four confusions? (Or can we?)
Prior art
This is not the first attempt at a taxonomy of discourse types. See also:
Conversational Cultures: Combat vs Nurture (V2) and Combat vs Nurture & Meta-Contrarianism—“Combat vs Nurture” posits a spectrum between what I would call Collegial and Chavrusic discourse. A mismatch in expectations along this spectrum between the two conversation partners would then be explained as a Guru/Rebel discourse that shifts into Feeder/Troll because the “nurturer” doesn’t realize that the “combatant’s” enmity is actually insincere.
(Admittedly, this explanation does seem a bit forced.)
I specifically avoided using the term “combat” in this article so as to not overlap with the usage there, but I think the description of the “chavrusa” is pretty close to what IE/IE brings to mind.
Simulacra Levels and their Interactions—Level 1 is Collegial, while levels 2 and 3 roughly correspond to Quokka/Wormtongue and Chameleonic respectively. As for level 4, I suppose the models are incompatible here since statements on this level aren’t really “truth-apt propositions” at all.
But the 10-mode model also doesn’t capture certain subtleties such as the perspective of third-parties, and the difference between lying for the purpose of spreading false beliefs versus lying for the purpose of signalling group membership (a distinction which is elided by the behaviorist definition of “sincerity” above).
The term “Quokka” comes from this infamous post [Twitter] which I hadn’t actually read until now, but which I had heard about through the grapevine. Again, the roles aren’t supposed to be archetypes, and I don’t think there are many people who act like Quokkas in all situations; indeed, the concept is more useful as a label for something that people try to avoid being.
Summary of open questions
What exactly is going on with Yandere/Tsundere? Do such conversations ever occur in practice? Are they even imaginable?
It’s clear that your judgment of whether your values are aligned with the other person’s may change over the course of the conversation as you learn more about what their values actually are. Can this model capture that? If there’s a certain kind of conversation that invariably follows the same sequence of modes, then perhaps that’s a more empirically-valid category than these 10 modes.
How do we classify “agreeing for the wrong reasons”? Suppose Alice is leading a crusade against high-fructose-corn-syrup because she thinks it’s a plot by the Illuminati to turn everyone into lizardpeople, while Bob thinks to himself “Well, her heart’s in the right place; we’d all be healthier if we consumed less HFCS” and so he joins Alice’s group while going along with the Illuminati story to avoid starting a pointless debate with her. What is this? I guess this is a certain flavor of Collegial, with bits of Rebel/Guru to the extent that Bob tries to subtly manipulate Alice into agreeing with him for the right reasons. (But this may be a situation where the simulacrum framework is more helpful.)
Are the four one-way-confusability arguments compelling? Are there any other confusions which are really one-way?
Is “Feeder/Troll” really the best way of characterizing SE/IE? The term “Troll” is perhaps fraught with connotations of nihilism, and it’s not clear how nihilism fits in here. (In theory, a nihilist has no friends or enemies.)
In general, how should we understand the Insincere Enmity stance? It seems pretty obvious what the other three stances mean, but this one gives rise to confusion.
Sincerity and value-alignment aren’t binary; they lie on a continuum. Does it make sense to simplify them to yes-or-no questions?
Revisiting the original article
In this section I’m going to use the terms “mistake theory/-ist” and “conflict theory/-ist” in the way they’re used in each respective quote that I’m responding to, even though it’s not clear whether they have the same meaning in each quote, and I would prefer to avoid using the terms at all (as mentioned earlier).
Cassandra/Mule discourse is the most frustrating kind
Mistake theorists naturally think conflict theorists are making a mistake. On the object level, they’re not smart enough to realize that new trade deals are for the good of all, or that smashing the state would actually lead to mass famine and disaster.
Conflict theorists naturally think mistake theorists are the enemy in their conflict. On the object level, maybe they’re directly working for the Koch Brothers or the American Enterprise Institute or whoever.
Alice: … So, that’s the policy proposal and why it’ll make us both better off. It’s obvious! Why won’t you see reason?
Bob: No, shut up, evil scum! I don’t believe you. You must be some kind of paid shill. Who do you work for, again?
Alice (thinking): Argh, if only I could get through to him! I am the Guru and he is the Rebel. Once he understands my arguments he’ll realize I’m right and join me in a Collegial discussion where we can actually start coming up with solutions to these issues. But he won’t listen! He just wants to drag me into pointless mudslinging, but I refuse to be made the Feeder to his Troll. I just need to stick to the facts.
Bob (thinking): Ugh, I know how this game is played. Alice claims to be on my side so I’ll let my guard down and believe her cherrypicked evidence and motivated reasoning that really serves her interest and not mine. She wants me to become the Quokka to her Wormtongue. But I won’t do it! She is the Siren, but I am the Sailor. If I just keep insulting her then maybe she’ll drop this façade. Then at least I’ll have the satisfaction of an honest Antagonistic discussion.
Of course, it’s possible that in this situation, Alice (i.e. the person referred to here as the “mistake theorist”) is actually correct. Alternatively, it’s possible that Bob (the “conflict theorist”) is correct. But now we see a third alternative—maybe neither Alice nor Bob is correct, and in fact this is a Cassandra/Mule discourse. In that case, the conversation will go nowhere until one or both of them storm off in frustration.
(Now you can see why it was useful to coin all that jargon!)
“I’m not misanthropic, I just don’t like you”
Mistake theorists treat politics as science, engineering, or medicine. The State is diseased. We’re all doctors, standing around arguing over the best diagnosis and cure. … Conflict theorists treat politics as war. Different blocs with different interests are forever fighting to determine whether the State exists to enrich the Elites or to help the People.
Mistake theorists view debate as essential. … Conflict theorists view debate as having a minor clarifying role at best.
To the extreme “mistake theorist”, the world looks like this:
They may therefore project their view onto anyone who disagrees with them (which they call “conflict theorists”) and assume they view the world like this:
But to me, “conflict theory” (= the negation of mistake theory) is nothing more than the acknowledgement that the whole chart exists in the real world—that some conversations are productive, some aren’t, and others are worse than useless. The fact that enemies exist doesn’t mean that friends don’t also exist; the secondary-diagonal view (“Hobbesian individualism”) is an extreme strawman that almost nobody actually believes. In fact, posing it as a “refutation” of conflict theory is sure to raise all kinds of alarm-bells in the minds of anyone with a bit of world-wariness—if they didn’t already have a reason to be skeptical, the use of this classic confidence trick (“It’s not good to be so distrustful of everyone you meet...”) will certainly seal the deal.
In particular, the two extreme “diagonalist” views illustrated above both leave no space for the non-diagonal modes (Cassandra/Mule, Quokka/Wormtongue, and Feeder/Troll at least—although I continue to be unsure of the explanatory value of Yandere/Tsundere), without which the meta-level disagreements alluded to earlier (“What kind of conversation are we having right now?”) cannot be understood.
Free speech
I found this part disorienting when I got to it:
Mistake theorists think that free speech and open debate are vital, the most important things. … Conflict theorists think of free speech and open debate about the same way a 1950s Bircher would treat avowed Soviet agents coming into neighborhoods and trying to convince people of the merits of Communism.
Up until that point I had been mostly siding with the “conflict theorist” in each example, and on the topic of free speech I was thinking to myself: “Yes, obviously, as a conflict theorist I’m pro-free speech. How could I not be? The elite is full of evil people expressing Insincere Friendship, using their position of authority to spread false information, the believing of which will cause people to act in the elites’ interest and not their own. Therefore it’s essential that we have the right to speak up and expose their lies. The only people who could possibly be against this are either mistake theorists (who naïvely think the people doing the censorship will be well-intentioned guards against misinformation) or the Wormtongues who’ve gained their ears.”
The reason for this discrepancy is that “free speech” may be referring to two distinct things. It may be a bit tricky to explain since a full analysis would require a treatment of three-party discussions, but briefly:
In one sense, we have a situation where a bunch of people have come together to work on a goal that they all share (“People for the Promotion of X”), and then Alice joins the club saying “How do you do, fellow pro-X-ers! I have some ideas for how we can achieve X more effectively,” and then proceeds to give a bunch of proposals that are so egregiously bad that they would actually be harmful for X. Then, any “mistake theorists” in the club will say “Let’s hear her out and respond with our counterarguments. If we succeed in convincing Alice, then we’ve gained a more effective supporter of X; if she convinces us, then we can fix our strategy,” whereas the “conflict theorists” will say “No, this person is a bad actor trying to trick us into working contrary to X; she should be ejected from the club.” (Confusingly, this kind of activity is commonly called “concern trolling” although it has nothing to do with the “Troll” role as I’ve defined it here. Oh well; the terminology is overloaded.)
In another sense, however, we might be thinking of free public speech, i.e. speech that’s open for everyone to hear (but which you can tune out if you’re not interested). The mistake theorist will regard this as the same as the club case, because they assume that everyone in the society, just like in the club, is pro-X. I, on the other hand, would respond with the pro-free-speech argument I outlined 3 paragraphs ago. But that’s not because I’m a conflict theorist per se; the argument only makes sense when the conflict is thought to be “me and the audience versus the elites” as opposed to “me and the elites versus the audience.” If one believed the latter, then one would be anti-free-speech for the reason given in the quotation; however, this is only one manifestation of conflict theory, and not the most common one nowadays—at least as far as I can tell!
Personal note: I’ll take a Chavrusa over a Guru any day
This is more of an aesthetic preference than a rational argument, but I personally have a distaste for self-proclaimed Gurus (e.g.). It strikes me as sleazy and evasive. If someone thinks my professed values are bad but that I might be convinced to change them, then I’d much rather they challenge me with a stance of “Your values are bad and here’s why” rather than tell me that those are not, in fact, my values, and that if I looked deep within myself I would realize that giving the Guru all my money was what I wanted to do all along. But hey, that’s just me.
Concluding remarks
As with any other “insight”-style post, you should take this model as a starting point and not a conclusion. Its usefulness will depend on whether you can readily think of examples of the 10 modes of discourse, or whether on the contrary it seems like experience needs to be forced into the model. I happen to think that the 12-role interplay described in the “Cassandra/Mule” section above is an accurate description of the debates I’ve seen between rationalists and non- or post-rationalists. I see Chameleonic discourse in big-city local politics when debates are framed as about what’s best for “the” community, while Chavrusic discourse is what you might get at rationalist parties after having a bit too much to drink.
What do you think?
Great post, I enjoyed it.
Regarding the insincere friendship/insincere enmity relationship, I think a very simple example of this that I see all the time is the negotiating relationship between a seller and a buyer. The seller insincerely states that he thinks their values are aligned and that it’s therefore in the buyer’s interest to buy, and the buyer insincerely states that he doesn’t think their values are aligned (even if they are) because he wants a lower price.
Regarding free speech, I think there’s a missed complication in how the relationship plays out between conflict theorists. For example, many conservatives (and especially the pro-conflict, culture war conservatives) believe very strongly in the importance of free speech, and not just because they want to maintain their permit to troll. If words are an effective arena of battle for your group then you tend to be in favour of free speech, and if they’ve historically been used against your group then you tend to be against it.
#1 - I hadn’t thought of it in those terms, but that’s a great example.
#2 - I think this relates to the involvement of the third-party audience. Free speech will be “an effective arena of battle for your group” if you think the audience will side with you once they learn the truth about what [outgroup] is up to. Suppose Alice and Bob are the rival groups, and Carol is the audience, and:
Alice/Bob are SE/SE (Antagonist/Antagonist)
Alice/Carol are SF/IE (Guru/Rebel)
Bob/Carol are IF/SE (Siren/Sailor)
If this is really what’s going on, Alice will be in favor of the debate continuing because she thinks it’ll persuade Carol to join her, while Bob is opposed to the debate for the same reason. This is why I personally am pro-free-speech—because I think I’m often in the role of Carol, and supporting free speech is a “tell” for who’s really on my side.
I think it has a lot more to do with status quo preservation than truthseeking. If I’m Martha Corey living in Salem, I’m obviously not going to support the continued investigations into the witching activities of my neighbours and husband, and the last reason for that is any fear that the truth will be exposed, i.e. that I’ve been casting hexes on the townsfolk all this time.
I think a much simpler explanation is that continued debate increases the chances I’m put on trial, and I’d much rather have the status quo of not debating whether I’m a witch preserved. If it were a social norm in Salem to run annual witching audits on the townsfolk, perhaps I’d support debate for not doing that any more. The witch hunting guild might point a kafkaesque finger at me in return because they’d much rather keep up the audits.
Up stands Elizabeth Hubbard who calmly explains that if no wrongdoing has taken place then no negative consequences will occur, and that she is concerned by the lack of clarity and accountability displayed by those who would shut down such discussions before they’ve even begun.
In your example, what makes Alice (Elizabeth) the guru and Bob (Martha) the siren?
Isn’t the fact that the buyer wants a lower price proof that the seller and buyer’s values aren’t aligned?
In almost all cases, the buyer will grossly exaggerate the degree to which values are not aligned in the hopes of driving the seller down in price. In most cases, the buyer has voluntarily engaged the seller (or even if they haven’t, if they consider the deal worth negotiating then there must be some alignment of values).
Even if I think the price is already acceptable to me, I will still haggle insincerely because of the prospect of an even better deal.
It seems weird to me to call a buyer and seller’s values aligned just because they both prefer outcome A to outcome B, when the buyer prefers C > A > B > D and the seller prefers D > A > B > C, which are almost exactly misaligned. (Here A = sell at current price, B = don’t sell, C = sell at lower price, D = sell at higher price.)
I think the important value here is not the assets changing hands as part of the exchange, but rather the value each party stands to gain from the exchange. Both parties are aligned that shaking hands on the current terms is acceptable to them, but they will both lie about that fact if they think it helps them move towards C or D.
Or to put it another way, in your frame I don’t think any kind of collaboration can ever be in anyone’s interests unless you are aligned in Every Single Thing.
If I save a drowning person, in a mercenary way it is preferable to them that I not only save them but also give them my wallet. Therefore my saving them was not a product of aligned interests (desire to not drown + desire to help others) since the poor fellow must now continue to pay off his credit card debt when his preference is to not do that.
For me, B > A > D > C, and for the drowning man, A > B > C > D (Here A = rescue + give wallet, B = rescue, no wallet, C = no rescue, throw wallet into water, D = walk away)
What matters in the drowning relationship (and the reason for our alignment) is B > C. Whether or not I give him my wallet is an independent variable from whether I save him and the resulting alignment should be considered separately.
In your example, I’m focusing on the alignment of A and B. Both parties will be dishonest about their views on A and B if they think it gets them closer to alignment on C and D. That’s the insincerity.
Hmm, the fact that C and D are even on the table makes it seem less collaborative to me, even if you are only explicitly comparing A and B. But I guess it is kind of subjective.
It’s a question of whether drawing a boundary on the “aligned vs. unaligned” continuum produces an empirically-valid category; and to this end, I think we need to restrict the scope to the issues actually being discussed by the parties, or else every case will land on the “unaligned” side. Here, both parties agree on where they stand vis-a-vis C and D, and so would be “Antagonistic” in any discussion of those options, but since nobody is proposing them, the conversation they actually have shouldn’t be characterized as such.
As I understood it, the whole point is that the buyer is proposing C as an alternative to A and B. Otherwise, there is no advantage to him downplaying how much he prefers A to B / pretending to prefer B to A.
Maybe love things? Or female things?
I’ve seen mules in the wild in internet forums (which, admittedly, is outside the scope of your post). They usually present as ardent defenders of the faith, repeating well-known talking points…and never updating, ever.
On the contrary, I’d say internet forum debating is a central example of what I’m talking about.
Do Cassandras always believe they are Gurus? What happens if a Cassandra catches on and tries to convince the Mule they’re being sincere?
This “trying to convince” is where the discussion will inevitably lead, at least if Alice and Bob are somewhat self-aware. After the object-level issues have been tabled and the debate is now about whether Alice is really on Bob’s side, Bob will view this as just another sophisticated trick by Alice. In my experience, Bob-as-the-Mule can only be dislodged when someone other than Alice comes along, who already has a credible stance of sincere friendship towards him, and repeats the same object-level points that Alice made. Only then will Bob realize that his conversation with Alice had been Cassandra/Mule.
(Example I’ve heard: “At first I was indifferent about whether I should get the COVID vaccine, but then I heard [detestable left-wing personalities] saying I should get it, so I decided not to out of spite. Only when [heroic right-wing personality] told me it was safe did I get it.”)