Something about this piece felt off to me, like I couldn’t see anything specifically wrong with it but still had a strong instinctive prior that lots of things were wrong.
After thinking about it for a bit, I think my main heuristic is: this whole piece sounds like it’s built on a conflict-theory worldview. The whole question of the essay is basically “who should we be angry at”? Based on that, I’d expect that many or most of the individual examples are probably inaccurately understood or poorly analyzed. Lark’s comment about the Wells Fargo case confirms that instinct for one of the examples.
Then I started thinking about the “conflict theory = predictably wrong” heuristic. We say “politics is the mindkiller”, but I don’t think that’s quite right—people have plenty of intelligent discussions about policy, even when those discussions inherently involve politics. “Tribalism is the mindkiller” is another obvious formulation, but I’d also propose “conflict theory is the mindkiller”. Models like “arguments are soldiers” or “our enemies are evil” are the core of Yudkowsky’s original argument for viewing politics as a mind-killer. But these sorts of models are essentially synonymous with conflict theory; if we could somehow have a tribalistic or political discussion without those conflict-theoretic elements, I’d expect it wouldn’t be so mindkiller-ish.
Looping back to the main topic of the OP: what would be a more mistake-theoretic way to view the same examples? One theme that jumps out to me is principal-agent problems: when something is outsourced, it’s hard to align incentives. That topic has a whole literature in game theory, and I imagine more useful insight could be had by thinking about how it applies to the examples above, rather than thinking about “moral culpability”—a.k.a. who to be angry at.
I changed my mind about conflict/mistake theory recently, after thinking about Scott’s comments on Zvi’s post. I previously thought that people were either conflict theorists or mistake theorists. I now use the distinction to label individual theories rather than people.
To point to a very public example, I don’t think Sam Harris is a conflict theorist or a mistake theorist; rather, he uses different theories to explain different disagreements. I think Sam Harris views his disagreements with people like Steven Pinker or Daniel Dennett as primarily them making reasoning mistakes, or otherwise failing to notice strong arguments against their position. And I think Sam Harris views his disagreements with people like <quickly googles Sam Harris controversies> Glenn Greenwald and Ezra Klein as primarily them attacking him for pushing goals that differ from those of their tribes.
I previously felt some not-insubstantial pull to pick sides in the conflict vs mistake theorist tribes, but I don’t actually think this is a helpful way of talking, not least because I think that sometimes I will build a mistake theory for why a project failed, and sometimes I will build a conflict theory.
To push back on this part:
Models like “arguments are soldiers” or “our enemies are evil” are the core of Yudkowsky’s original argument for viewing politics as a mind-killer. But these sorts of models are essentially synonymous with conflict theory; if we could somehow have a tribalistic or political discussion without those conflict-theoretic elements, I’d expect it wouldn’t be so mindkiller-ish.
“Arguments are soldiers” and “our enemies are evil” are not imaginary phenomena; they exist, people use such ideas regularly, and it’s important that I don’t prevent myself from describing reality accurately when this happens. I should be able to use a conflict theory.
I have a model of a common type of disagreement where people get angry at someone walking in with a mistake theory. It goes like this: Alice has some power over Bob, and kind of self-deceives her way into a situation where it’s right for her to take resources from Bob. As Bob gets angry at Alice and tries to form a small political force to punish her, Charlie comes along and says “No, you don’t understand, Alice just made an error of reasoning, and if I explain this to her she won’t make that mistake again!” Bob then gets really angry at Charlie and thinks Charlie is maybe secretly trying to help Alice, or else is strikingly oblivious / conflict-averse to an unhealthy degree. (Note this is a mistake theory about the disagreement between Bob and Charlie, and a conflict theory about the disagreement between Bob and Alice. And Charlie is wrong to use a mistake theory.)
I think the reason I’m tempted to split mistake and conflict into tribes is that I do know people who largely fit into one or the other. I knew people at school who always viewed interpersonal conflict as emanating from tribal self-interest, and who would view my attempts to show a solution that didn’t require someone being at fault as me trying to make them submit to some kind of weird technicality, and got justifiably irritated. I also know people who are very conflict-averse but also have an understanding of the complexity of reality, and so always assume it is merely a principal-agent problem or information-flow problem, as opposed to going “Yeah, Alice is just acting out of self-interest here, we need to let her know that’s not okay, and let’s not obfuscate this unnecessarily.” But I think the goal is to have one’s beliefs correspond to reality—to use a conflict theory when that’s true, a mistake theory when that’s true, and not pre-commit to one side or the other regardless of how reality actually is.
I do think that conflict theories are often pretty derailing to bring up when trying to have a meaningful 1-1 public debate, and that it’s good to think carefully about specific norms for how to do such a thing. I do think that straight-up banning them is likely the wrong move, though. There are many venues where they have no place, such as a math journal; but the mathematical community will still need somewhere to discuss internal politics and norm-violations when these need to be raised.
I think the whole “mistake theory vs conflict theory” thing needs to be examined and explained in greater detail, because there is a lot of potential to get confused about things (at least for me). For example:
Both “mistake statements” and “conflict statements” can be held sincerely, or can be lies strategically used against an enemy. For example, I may genuinely believe that X is racist, and then I would desire to make people aware of a danger X poses. The fact that I do not waste time explaining and examining specific details of X’s beliefs is simply because time is a scarce resource, and warning people against a dangerous person is a priority. Or, I may knowingly falsely accuse X of being racist, because I assume that gives me a higher probability of winning the tribal fight, compared to an honest debate about our opinions. (Note: The fact that I assume my opponent would win a debate doesn’t necessarily imply that I believe he is right. Maybe his opinions are simply more viral; more compatible with the existing biases and prejudices of listeners.) The same goes for mistake theory: I can sincerely explain how most people are not evil and yet Moloch devours everything; or I may be perfectly aware that the people of my tribe are at this moment fighting for our selfish collective interest, and yet present an ad-hoc theory to confuse the nerds of the opposing tribe into inaction.
Plus, there is always a gray zone between knowingly lying and sincerely held beliefs: unconscious biases, plausible deniability, all this “this person seems to be genuinely mistaken, but at the same time they resist all attempts to explain”, which seems to be the behavior of most people most of the time. It is a balancing act of being “aware on some level, but unaware on another level”, which allows us to navigate towards achieving our selfish goals while maintaining the image of innocence (including the self-image).
Then, we have different levels of meta. For example, suppose that Alice takes Bob’s apple and eats it. This is a factual description. On the first level, Charlie the conflict theorist might say “she knowingly stole the apple”, while Diana the mistake theorist might say “she just made a mistake and believed the apple was actually hers”. Now on the second level, a conflict theorist could say “of course Charlie accuses Alice of acting badly; he is a misogynist” (conflict explanation of conflict explanation), or “of course Diana would defend Alice; women have a strong in-group bias” (conflict explanation of mistake explanation). A mistake theorist could say “Charlie is a victim of illusion of transparency, just because he noticed the apple belongs to Bob, doesn’t mean Alice had to notice it, too” (mistake explanation of conflict explanation), or “Diana seems to be a nice person who would never steal, and she projects her attitude on Alice” (mistake explanation of mistake explanation). On the third level… well, it gets complicated quickly. And yet, people make models of each other, and make models of models other people have about them, so the higher levels will get constructed.
By the way, notice that “mistake theorists” and “conflict theorists” are not two opposing tribes, in the sense of tribal conflict. The same political tribe may contain both of them: some people believe their opponents are evil, others believe they are making a tragic mistake; both believe the opponents have to be stopped, by force if necessary. There may be conflict theorists on both sides: both explaining why the other side is making a power grab and needs to be stopped; or mistake theorists on both sides: both explaining why the other side is deluded.
...and I feel pretty sure there are other complications that I forgot at the moment.
EDIT:
For example, the conflict theory can be expressed in a mistake-theory lingo. Instead of saying “my evil opponent is just trying to get more power”, say “my uneducated opponent is unaware of his unconscious biases that make him believe that things that get him more power are the right ones”. You accused him of pretty much the same thing, but it makes your statement acceptable among mistake theorists.
I might be missing the forest for the trees, but all of those still feel like they end up making some kinds of predictions based on the model, even if they’re not trivial to test. Something like:
If Alice were informed by some neutral party that she took Bob’s apple, Charlie would predict that she would show no meaningful remorse and make no attempt to repair the damage beyond trivial gestures like an off-hand “sorry”, and that some other minor extraction of resources is likely to follow; Diana would predict that Alice would treat her overreach more seriously once informed of it. Something similar can be done on the meta-level.
None of these are slam dunks, and there are a bunch of reasons why the predictions might turn out exactly as laid out by Charlie or Diana even if their underlying theory is wrong, but that just feels like how Bayesian cookies crumble, and I would definitely expect evidence to accumulate over time in one direction or the other.
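To make the “accumulate over time” point concrete, here is a toy sketch (mine, with entirely made-up probabilities and outcomes, not anything from the example above): each observation of whether Alice shows real remorse shifts the odds between Diana’s mistake hypothesis and Charlie’s conflict hypothesis by a likelihood ratio, and repeated observations add up in log-odds.

```python
from math import log

# Hypothetical likelihoods, chosen only for illustration:
p_remorse_if_mistake = 0.8   # Diana: an honest mistake makes real remorse likely
p_remorse_if_conflict = 0.2  # Charlie: self-interest makes real remorse unlikely

observations = [True, False, True, True]  # made-up outcomes across repeated incidents

log_odds_mistake_vs_conflict = 0.0  # start at even odds between the two hypotheses
for showed_remorse in observations:
    if showed_remorse:
        log_odds_mistake_vs_conflict += log(p_remorse_if_mistake / p_remorse_if_conflict)
    else:
        log_odds_mistake_vs_conflict += log((1 - p_remorse_if_mistake) / (1 - p_remorse_if_conflict))

# Positive favors Diana's mistake theory, negative favors Charlie's conflict theory.
print(log_odds_mistake_vs_conflict)
```

No single observation settles anything, but the running log-odds is exactly the “evidence accumulating in one direction or the other” described above.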
Strong opinion weakly held: it feels like an iterated version of this prediction-making and tracking over time is how our native bad actor detection algorithms function. It seems to me that shining more light on this mechanism would be good.
After reading this and the comments you linked, I think people mean several different things by conflict/mistake theory.
I mostly think of conflict theory as a worldview characterized by (a) assuming that bad things mostly happen because of bad people, and (b) assuming that the solution is mostly to punish them and/or move power away from them. I think of mistake theory as a worldview characterized by assuming that people do not intend to be evil (although they can still have bad incentives). I see mechanism design as the prototypical mistake theory approach: if people are misaligned, then restructure the system to align their incentives. It’s a technical problem, and getting angry at people is usually unhelpful.
In the comment thread you linked, Scott characterizes conflict theory as “the main driver of disagreement is self-interest rather than honest mistakes”. That view matches up more with the example you give: the mistake theorist assumes that people have “good” intent, and if you just explain that their actions are harmful, then they’ll stop. Under this interpretation, mechanism design is conflict-theory-flavored; it’s thinking of people as self-interested and then trying to align them anyway.
(I think part of the confusion is that some people are coming in with the assumption that acting in self-interest is automatically bad, and others are coming in with more of an economic/game theory mindset. Like, from an economic viewpoint, there’s no reason why “the main driver of disagreement is self-interest” would lead to arguing that public choice theory is racist, which was one of Scott’s original examples.)
So I guess one good question to think about is: how do we categorize mechanism design? Is it conflict, is it mistake, is it something else? Different answers correspond to different interpretations of what “conflict” and “mistake” theory mean. I’m pretty sure my interpretation is a much better fit to the examples and explanations in Scott’s original post on the topic, and it seems like a natural categorization to me. On the other hand, it also seems like there’s another natural category of naive-mistake-theorists who just assume honest mistakes, as in your Bob-Charlie example, and apparently some people are using the terms to capture that category.
Personally, my view is that mechanism design is more-or-less-always the right way to think about these kinds of problems. Sometimes that will lead to the conclusion that someone is making an honest mistake, sometimes it will lead to the conclusion that punishment is an efficient strategy, and often it will lead to other conclusions.
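As a concrete illustration of the mechanism-design framing, here is a toy sketch (my own example, not something from this thread): a second-price auction takes bidders’ self-interest for granted and, instead of asking who to blame for it, designs the payment rule so that honest bidding is each bidder’s best strategy.

```python
def second_price_auction(bids):
    """bids: dict mapping bidder name -> bid amount. The highest bidder wins
    but pays only the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

# Because the winner's payment does not depend on their own bid, bidding one's
# true value is a dominant strategy; no appeal to good intentions is needed,
# and nobody has to decide who to be angry at.
print(second_price_auction({"alice": 10, "bob": 7, "carol": 4}))  # ('alice', 7)
```

The names and numbers are arbitrary; the point is only that “restructure the system to align incentives” can be made fully precise without ever assigning moral blame.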
Like, from an economic viewpoint, there’s no reason why “the main driver of disagreement is self-interest” would lead to arguing that public choice theory is racist, which was one of Scott’s original examples.
IN DECEMBER 1992, AN OBSCURE ACADEMIC JOURNAL published an article by economists Alexander Tabarrok and Tyler Cowen, titled “The Public Choice Theory of John C. Calhoun.” Tabarrok and Cowen, who teach in the notoriously libertarian economics department at George Mason University, argued that the fire-breathing South Carolinian defender of slaveholders’ rights had anticipated “public choice theory,” the sine qua non of modern libertarian political thought.
...
Astutely picking up on the implications of Buchanan’s doctrine, Tabarrok and Cowen enumerated the affinities public choice shared with Calhoun’s fiercely anti-democratic political thought. Calhoun, like Buchanan a century and a half later, had theorized that majority rule tended to repress a select few. Both Buchanan and Calhoun put forward ideas meant to protect an aggrieved if privileged minority. And just as Calhoun argued that laws should only be approved by a “concurrent majority,” which would grant veto power to a region such as the South, Buchanan posited that laws should only be made by unanimous consent. As Tabarrok and Cowen put it, these two theories had “the same purpose and effect”: they oblige people with different interests to unite—and should these interested parties fail to achieve unanimity, government is paralyzed.
In marking Calhoun’s political philosophy as the crucial antecedent of public choice theory, Tabarrok and Cowen unwittingly confirmed what critics have long maintained: libertarianism is a political philosophy shot through with white supremacy. Public choice theory, a technical language nominally about human behavior and incentives, helps ensure that blacks remain shackled.
...
In her 2017 book, Democracy in Chains: The Deep History of the Radical Right’s Stealth Plan for America, historian Nancy MacLean argues that Buchanan developed his ideas in service of a Virginia elite hell-bent on preserving Jim Crow.
The overall argument is something like:
Calhoun and Buchanan both had racist agendas (maintaining slavery and segregation). (They may have these agendas due to some combination of personal self-interest and class self-interest)
They promoted ideas about democratic governance (e.g. that majority rule is insufficient) that were largely motivated by these agendas.
These ideas are largely the same as those of public choice theory (as pointed out by Cowen and Tabarrok).
Therefore, it is likely that public choice theory is advancing a racist agenda, and continues being advocated partially for this reason.
Overall, this is an argument that personal self-interest, or class self-interest, is driving the promotion of public choice theory. (Such interests and their implications could be studied within economics, though economics typically avoids discussing group interests except in the context of discrete organizational units such as firms.)
Another way of looking at this is:
Economics, mechanism design, public choice theory, etc are meta-level theories about how to handle conflicts of interest.
It would be desirable to have agreement on good meta-level principles in order to resolve object-level conflicts.
However, the choice of meta-level principles (and, the mapping between those principles and reality) is often itself political or politicized.
Therefore, there will be conflicts over these meta-level principles.
Let’s imagine for a minute that we didn’t know any of the background, and just think about what we might have predicted ahead of time.
Frame 1: conflict theory is characterized by the idea that problems mostly come from people following their own self-interest. Not knowing anything else, what do we expect conflict theorists to think about public choice theory—a theory whose central premise is modeling public servants as following their own self-interests/incentives? Like, the third sentence of the Wikipedia article is “it is the subset of positive political theory that studies self-interested agents (voters, politicians, bureaucrats) and their interactions”.
If conflict theory is about problems stemming from people following their self-interest, public choice theory ought to be right up the conflict theorist’s alley. This whole “meta-level conflict” thing sounds like a rather contrived post-hoc explanation; a-priori there doesn’t seem to be much reason for all this meta stuff. And conflict theorists in practice seem to be awfully selective about when to go meta, in a way that we wouldn’t predict just based on “problems mostly stem from people following their self-interest”.
On the other hand...
Frame 2: conflict theory is characterized by the idea that bad things mostly happen because of bad people, and the solution is to punish them. In this frame, what would we expect conflict theorists to think of public choice theory?
Well, we’d expect them to dismiss it as obviously wrong—it doesn’t denounce any bad people—and therefore also probably an attempt by bad people to steer things the way they want.
If conflict theory is characterized by “bad things happen because of bad people”, then an article about how racism secretly underlies public choice theory is exactly the sort of thing we’d predict.
I think it’s a genuinely difficult problem to draw the boundary between a conflict and a mistake theory, in no small part due to the difficulties in drawing the boundary between lies and unconscious biases (which I rambled a bit about here). You can also see the discussion on No, it’s not The Incentives—it’s you as a disagreement over where this boundary should be.
That said, one thing I’ll point out is that explaining Calhoun and Buchanan’s use of public choice theory as entirely a rationalisation for their political goals is a conflict theory. It’s saying that their bringing public choice theory into the conversation was not a good-faith attempt to convey how they see the world, but obfuscation in favour of their political side winning. And more broadly, saying that public choice theory is racist is a theory that says the reason it is brought up in general is not that people have differing understandings of economics, but that people have different political goals and are trying to win.
I find that thinking of ‘conflict theorists’ as a single coherent group is confusing me, and that I should instead replace the symbol with the substance when I’m tempted to use the term: there are many types of people who sometimes use conflict theories, and it is confusing to lump them in with people who always use them, because they often have different reasons for using them when they do.
To give one example of people who always use it: there are certain people who, for most of their lives, have found that the main determinant of their outcomes is political conflict by people above them, and who are only really able to understand the world using theories of conflict. They’ve also never gained a real understanding of any of the fascinating and useful different explanations for how social reality works (example, example), or a sense that you can often expand massively rather than fight over existing resources. And when they’re looking at someone bringing in public choice theory to argue one side of a social fight, they get an impression that the person is finding clever arguments for their position, rather than being honest.
(This is a mistake theory of why some people primarily reason using conflict theories. There are conflict theories that explain it as well.)
I think it’s good to be able to describe what such people are doing, and what experiences have led them to that outlook on life. But I also think that there are many reasons for holding a conflict theory about a situation, and these people are not at all the only examples of people who use such theories regularly.
Added: clone of saturn’s 3 point explanation seems right to me.
I get what you’re saying about theories vs theorists. I agree that there are plenty of people who hold conflict theories about some things but not others, and that there are multiple reasons for holding a conflict theory.
None of this changes the original point: explaining a problem by someone being evil is still a mind-killer. Treating one’s own arguments as soldiers is still a mind-killer. Holding a conflict theory about any particular situation is still a mind-killer, at least to the extent that we’re talking about conflict theory in the form of “bad thing happens because of this bad person” as opposed to “this person’s incentives are misaligned”. We can explain other peoples’ positions by saying they’re using a conflict theory, and that has some predictive power, but we should still expect those people to usually be mind-killed by default—even if their arguments happen to be correct.
As you say, explaining Calhoun and Buchanan’s use of public choice theory as entirely a rationalisation for their political goals is a conflict theory. Saying that people bring up public choice theory not due to differing economic understanding but due to different political goals is a conflict theory. And I expect people using either of those explanations to be mind-killed by default, even if the particular interpretation were correct.
Even after all this discussion of theories vs theorists, “conflict theory = predictably wrong” still seems like a solid heuristic.
Sorry for the delay, a lot has happened in the last week.
Let me point to where I disagree with you.
Holding a conflict theory about any particular situation is still a mind-killer, at least to the extent that we’re talking about conflict theory in the form of “bad thing happens because of this bad person” as opposed to “this person’s incentives are misaligned”.
My sense is you are underestimating the cost of not being able to use conflict theories. Here are some examples where I feel like prohibiting me from even considering that a bad thing happened because a person was bad will severely limit my ability to think and talk freely about what is actually happening.
Sam Harris and his aggressive clashes with people like Ezra Klein and Glenn Greenwald.
There’s something very valuable that you’re pointing at, and I agree with a lot of it. There shouldn’t be conflict theories in a math journal. It’s plausible to me there shouldn’t be conflict theories in an economics journal. And it’s plausible to me that the goal should be for the frontpage of LessWrong to be safe from them too, because they bring major costs in terms of their mind-killing nature, and furthermore because several of the above bullet points are simply off-topic for LessWrong. We’re not here to discuss current-day tribal politics in various institutions, industries and communities.
And if I were writing publicly about any of the above topics, I would heavily avoid bringing conflict theories—and have in the past re-written whole essays to be making only object-level points about a topic rather than attacking a particular person’s position, because I felt the way I had written it would come across as a bias-argument / conflict theory and destroy my ability to really dialogue with people who disagreed with me. Rather than calling them biased or self-interested, I prefer to use the most powerful of rebuttals in the pursuit of truth, which is showing that they’re wrong.
But ruling it out wholly in one’s discourse and life seems way too much. I think there are cases where wholly censoring conflict theories will cost far more than it’s worth, and that removing them entirely from your discourse will cripple you and allow you to be taken over by outside forces that want your resources.
For example, I can imagine a relatively straightforward implementation of “no conflict theories” in a nearby world meaning that I am not able to say that study after study is suspect, or that a position is being pushed by political actors, unless I first reinvent mechanism theory and a bunch of microeconomics and a large amount of technical language to discuss bias. If I assume the worst about all of the above bullet points, not being able to talk about bad people causing bad things could mean we are forced to believe lots of false study results and ignore a new theory of fundamental physics, plus silence economists, bloggers, and public intellectuals.
The Hanson examples above feel the strongest to me, because they are central examples of something that can lead to a universal, deep insight about reality and be a central part of LessWrong’s mission of understanding human rationality, whereas the others are mostly about current tribal politics. But I think they all substantially affect how much to trust our info sources.
My current sense is that I should think of posing conflict theories as a highly constrained, limited communal resource, and that while spending it will often cause conflict and people to be mind-killed, a rule that says one can never use that resource will mean that when that resource is truly necessary, it won’t be available.
***
Hmm.
I re-read the OP, and realise I actually identify a lot with your initial comment, and that I gave Elizabeth similar feedback when I read an earlier draft of hers a month ago. The wording of the OP crosses a few of my personal lines such that I would not publish it. And it’s actually surprisingly accurate to say that the key thing I’d be doing if I were editing the OP would be turning it from things that had a hint of being like a conflict theory (aren’t people with power bad!) to things that felt like a mistake theory (here’s an interesting mechanism where you might mistakenly allocate responsibility). Conflict theories tend to explode and eat up communal resources in communities and on the internet generally, and are a limited (though necessary) resource that I want to use with great caution.
As I said above, if I were writing publicly about any topics where I had conflict theories, I would heavily avoid bringing them up—and have in the past re-written whole essays to make only object-level points about a topic rather than attack a particular person’s position, because I felt the way I had written it would come across as a bias-argument / conflict theory and destroy my ability to really dialogue with people who disagreed with me. When I get really irritated with someone’s position and have a conflict theory about the source of the disagreement, I still write mistake-theory posts like this, a post with no mention of the original source of motivation.
I think that one of the things that’s most prominent to me on the current margin is that I feel like there are massive blockers on public discourse, stopping people from saying or writing anything, and I have a model whereby telling people who write things like the OP to do more work to make it all definitely mistake theory (which is indeed a standard I hold myself to) will not improve the current public discourse, but on the current margin simply stop public discourse. I feel similarly about Jessicata’s post on AI timelines, where it is likely to me that the main outcome has been quite positive—even though I think I disagree with each of the three arguments in the post and its conclusion—because the current alternative is almost literally zero public conversation about plans for long AI timelines. I already am noticing personal benefits from the discourse on the subject.
In the first half of this comment I kept arguing against the position “We should ban all conflict theories” rather than “Conflict theories are the mind-killer”, which are two very different claims, only one of which you’ve been making. Right now I want to defend people’s ability to write down their thoughts in public, and I think the OP is strongly worth publishing in the situation we’re in. I could imagine a world where there was loads of great discussion of topics like what the OP is about, where the OP would stand out for not having met the higher standard of effort to avoid mind-killing anyone that the other posts had met, where I’d go “this is unnecessarily likely to make people feel defensive and like there’s subtle tribal politics underpinning its conclusions, consider these changes?” But right now I’m very pro “Cool idea, let me share my thoughts on the subject too.”
(Some background: The OP was discussed about 2 weeks ago on Elizabeth’s FB wall, where someone else proposed a different reason why this post needed re-writing (PR concerns), and I argued there already that they shouldn’t put such high bars on people writing things. I think that person’s specific suggestion, if taken seriously, would be incredibly harmful to public discourse regardless of its current health, whereas in this case I think your literal claims are just right. Regardless, I am strongly pro the post and others like it being published.)
Conflict theories tend to explode and eat up communal resources in communities and on the internet generally, and are a limited (though necessary) resource that I want to use with great caution.
But are theories that tend to explode and eat up communal resources therefore less likely to be true? If not, then avoiding them for the sake of preserving communal resources is a systematic distortion on the community’s beliefs.
The distortion is probably fine for most human communities: keeping the peace with your co-religionists is more important than doing systematically correct reasoning, because religions aren’t trying to succeed by means of doing systematically correct reasoning. But if there is to be such a thing as a rationality community specifically, maybe communal resources that can be destroyed by the truth, should be.
But are theories that tend to explode and eat up communal resources therefore less likely to be true? If not, then avoiding them for the sake of preserving communal resources is a systematic distortion on the community’s beliefs.
Expected infrequent discussion of a theory shouldn’t lower estimates of its probability. (Does the intuition that such theories should be seen as less likely follow from most natural theories predicting discussion of themselves? Erroneous theorizing also predicts that, for example “If this statement is correct, it will be the only topic of all future discussions.”)
In general, it shouldn’t be possible to expect well-known systematic distortions for any reason, because they should’ve been recalibrated away immediately. What not discussing a theory should cause is lack of precision (or progress), not systematic distortion.
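A toy numerical check of that recalibration point (my own sketch with made-up numbers, not part of the comment above): by conservation of expected evidence, a coherent reasoner’s current credence already equals the expectation of their future credence, so a distortion whose direction you can predict should already have been folded into your current belief.

```python
prior_H = 0.4                 # hypothetical current credence in hypothesis H
p_E = 0.5                     # hypothetical chance that evidence E shows up
p_H_given_E = 0.6             # planned posterior credence if E is observed

# Coherence (the law of total probability) pins down the other branch:
p_H_given_not_E = (prior_H - p_H_given_E * p_E) / (1 - p_E)

expected_posterior = p_E * p_H_given_E + (1 - p_E) * p_H_given_not_E
print(round(p_H_given_not_E, 3), round(expected_posterior, 3))  # 0.2 0.4, equal to the prior
```

Whatever numbers are plugged in, the expected posterior comes out equal to the prior; what silence about a topic can cost you is precision, not a predictable bias.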
Suppose that, in fact, a conflict theory is a good explanation for phenomenon X.
However, people only state mistake theories for X, because conflict theories are taboo.
Is your prediction that the participants in the conversation, readers, etc, are not misled by this? Would you predict that, if you gave them a survey afterwards asking for how they would explain X, they in fact give a conflict theory rather than a mistake theory, since they corrected for the distortion due to the conflict theory taboo?
Would you correct your response so? (Should you?) If the target audience tends to act similarly, so would they.
Aside from that, “How do you explain X?” is really ambiguous and anchors on well-understood rather than apt framing. “Does mistake theory explain this case well?” is better, because you may well use a bad theory to think about something while knowing it’s a bad theory for explaining it. If it’s the best you can do, at least this way you have gears to work with. Not having a counterfactually readily available good theory because it’s taboo and wasn’t developed is of course terrible, but it’s not a reason to embrace the bad theory as correct.
Perhaps (75% chance?), in part because I’ve spent >100 hours talking about, reading about, and thinking about good conflict theories. I would have been very likely misled 3 years ago. I was only able to get to this point because enough people around me were willing to break conflict theory taboos.
It is not the case that everybody knows. To get from a state where not everybody knows to a state where everybody knows, it must be possible to talk openly about such things. (I expect the average person on this website to make the correction with <50% probability, even with the alternative framing “Does mistake theory explain this case well?”)
It actually does have to be a lot of discussion. Over-attachment to mistake theory (even when a moderate amount of contrary evidence is presented) is a systematic bias I’ve observed, and it can be explained by factors such as: conformity, social desirability bias (incl. fear), conflict-aversion, desire for a coherent theory that you can talk about with others, getting theories directly from others’ statements, being bad at lying (and at detecting lying), etc. (This is similar to, and may even be considered a special case of, the question of why people are misled by propaganda even when there is some evidence that the propaganda is propaganda; see Gell-Mann amnesia.)
This seems a bit off as Jessica clearly knows about conflict theory. The whole thing about making a particular type of theory taboo is that it can’t become common knowledge.
That’s relevant to the example, but not to the argument. Consider a hypothetical Jessica less interested in conflict theory or a topic other than conflict theory. Also, common knowledge doesn’t seem to play a role here, and “doesn’t know about” is a level of taboo that contradicts the assumption I posited about the argument from selection effect being “well-known”.
In general, it shouldn’t be possible to expect well-known systematic distortions for any reason, because they should’ve been recalibrated away immediately.
Hm. Is “well-known” good enough here, or do you actually need common knowledge? (I expect you to be better than me at working out the math here.) If it’s literally the case that everybody knows that we’re not talking about conflict theories, then I agree that everyone can just take that into account and not be confused. But the function of taboos, silencing tactics, &c. among humans would seem to be maintaining a state where everyone doesn’t know.
Is “well-known” good enough here, or do you actually need common knowledge?
There is no need for coordination or dependence on what others think. If you expect yourself to be miscalibrated, you just fix that. If most people act this way and accept the argument that convinced you, then you expect them to have done the same.
My current sense is that I should think of posing conflict theories as a highly constrained, limited communal resource, and that while spending it will often cause conflict and people to be mind-killed, a rule that says one can never use that resource will mean that when that resource is truly necessary, it won’t be available.
“Talking about conflict is a limited resource” seems very, very off to me.
There are two relevant resources in a community. One is actual trustworthiness: how often do people inform each other (rather than deceive each other), help each other (rather than cheat each other), etc. The other is correct beliefs about trustworthiness: are people well-calibrated and accurate about how trustworthy others (both in particular and in general) are. These are both resources. It’s strictly better to have more of each of them.
If Bob deceives me,
I desire to believe that Bob deceives me;
If Bob does not deceive me,
I desire to believe that Bob does not deceive me;
Let me not become attached to beliefs I may not want.
Talking about conflict in ways that are wrong is damaging a resource (it’s causing people to have incorrect beliefs). Using clickbaity conflict-y titles without corresponding evidence is spending a resource (attention). Talking about conflict informatively/accurately is not spending a resource, it’s producing a resource.
EDIT: also note, informative discussion of conflict, such as in Robin Hanson’s work, makes it easier to talk informatively about conflict in the future, as it builds up theoretical framework and familiarity. Which means “talking about conflict is a limited resource” is backwards.
I’m hearing you say “Politics is not the mind-killer, talking inaccurately and carelessly about politics is the mind-killer! If we all just say true things and don’t try to grab attention with misleading headlines then we’ll definitely just have a great and net positive conversation and nobody will feel needlessly threatened or attacked”. I feel like you are aware of how toxic things like bravery debates are, and I expect you agree they’d be toxic even if everyone tried very hard to only say true things. I’m confused.
I’m saying it always bears a cost, and a high one, but not a cost that cannot be overcome. I think that the cost is different in different communities, and this depends on the incentives, norms and culture in those communities, and you can build spaces where a lot of good discussion can happen with low cost.
You’re right that Hanson feels to me pretty different from my other examples, in that I don’t feel like marginal Overcoming Bias blog posts are paying a cost. I suspect this might have to do with the fact that Hanson has sent a lot of very costly signals that he is not fighting a side but is just trying to be an interested scientist. But I’m not sure exactly why I feel differently in this case.
I’m going to try explaining my view and how it differs from the “politics is the mind killer” slogan.
People who are good at talking about conflict, like Robin Hanson, can do it in a way that improves the ability for people to further talk rationally about conflict. Such discussions are not only not costly, they’re the opposite of costly.
Some people (most people?) are bad at talking about conflict. They’re likely to contribute disinformation to these discussions. The discussions may or may not be worth having, but, it’s not surprising if high-disinformation conversations end up quite costly.
My view: people who are actually trying can talk rationally enough about conflict for it to be generally positive. The issue is not a question of ability so much as a question of intent-alignment. (Though, getting intent aligned could be thought of as a kind of skill). (So, I do think political discussions generally go well when people try hard to only say true things!)
Why would I believe this? The harms from talking about conflict aren’t due to people making simple mistakes, the kind that are easily corrected by giving them more information (which could be uncovered in the course of discussions of conflict). Rather, they’re due to people enacting conflict in the course of discussing conflict, rather than using denotative speech.
Yes, I am advocating a conflict theory, rather than a mistake theory, for why discussions of conflict can be bad. I think, if you consider conflict vs mistake theories, you will find that a conflict theory makes better predictions for what sorts of errors people make in the course of discussing conflict, than a mistake theory does. (Are errors random, or do they favor fighting on a given side / appeasing local power structures / etc?)
Basically, if the issue is adversarial/deceptive action (conscious or subconscious) rather than simple mistakes, then “politics is the mind-killer” is the wrong framing. Rather, “politics is a domain where people often try to kill each other’s minds” is closer.
In such a circumstance, building models of which optimization pressures are harming discourse in which ways is highly useful, and actually critical for social modeling. (As I said in my previous comment, it’s strictly positive for an epistemic community to have better information about the degree of trustworthiness of different information systems.)
If you see people making conflict theory models, and those models seem correct to you (or at least, you don’t have any epistemic criticism of them), then shutting down the discussions (on the basis that they’re conflict-theorist) is actively doing harm to this model-building process. You’re keeping everyone confused about where the adversarial optimization pressures are. That’s like preventing people from turning on the lights in a room that contains monsters.
Therefore, I object to talking about conflict theory models as “inherently costly to talk about” rather than “things some (not all!) people would rather not be talked about for various reasons”. They’re not inherently costly. They’re costly because some optimization pressures are making them costly. Modeling and opposing (or otherwise dealing with) these is the way out. Insisting on epistemic discourse even when such discourse is about conflict is a key way of doing so.
Thank you, this comment helped me understand your position quite a bit. You’re right that discussing conflict theories is not inherently costly; it’s that they’re often costly because powerful optimization pressures are punishing discussion of them.
I strongly agree with you here:
I am advocating a conflict theory, rather than a mistake theory, for why discussions of conflict can be bad. I think, if you consider conflict vs mistake theories, you will find that a conflict theory makes better predictions for what sorts of errors people make in the course of discussing conflict, than a mistake theory does.
This is also a large part of my model of why discussions of conflict often go bad—power struggles are being enacted out through (and systematically distorting the use of) language and reasoning.
(I am quite tempted to add that, even in a room with mostly scribes, the incentive for actors to pretend to be scribes can make it very hard for a scribe to figure out whether someone is a scribe or an actor, and this information asymmetry can lead to scribes distrusting all attempts to discuss conflict theories and reading such discussions as political coordination.
Yet I notice that I pretty reflexively looked for a mistake theory there, and my model of you suggested to me the hypothesis that I am much less comfortable with conflict theories than mistake theories. I guess I’ll look out for this further in my thinking, and consider whether it’s false. Perhaps, in this case, it is way easier than I’m suggesting for scribes to recognise each other, and the truth is we just have very few scribes.)
The next question is under what norms, incentives and cultures can one have discussions of conflict theories where people are playing the role of Scribe, and where that is common knowledge. I’m not sure we agree on the answer to that question, or what the current norms in this area should be. I’m working on a longer answer, maybe post-length, to Zach’s comment below, so I’ll see if I can present my thoughts on that.
By the way, this is a fantastic comment and would make a great post pretty much by itself (with maybe a little context about the comment it’s replying to).
enacting conflict in the course of discussing conflict
… seems to be exactly why it’s so difficult to discuss a conflict theory with someone already convinced that it’s true – any discussion is necessarily an attack in that conflict as it in effect presupposes that it might be false.
But that also makes me think that maybe the best rhetorical counter to someone enacting a conflict is to explicitly claim that one’s unconvinced of the truth of the corresponding conflict theory or to explicitly claim that one’s decoupling the current discussion from a (or any) conflict theory.
This seems like dramatically over-complicating the idea. I would expect a prototypical conflict theorist to reason like this:
Political debates have winners and losers—if a consensus is reached on a political question, one group of people will be materially better off and another group will be worse off.
Public choice theory makes black people worse off. (I don’t know if the article is right about this, but I’ll assume it’s true for the sake of argument.)
Therefore, one ought to promote public choice theory if one wants to hurt black people, and disparage public choice theory if one wants to help black people.
This explanation loses predictive power compared to the explanation I gave above. In particular, if we think of conflict theory as “bad things happen because of bad people”, then it makes sense why conflict theorists would think public choice theory makes black people worse off, rather than better off. In your explanation, we need that as an additional assumption.
I don’t think it’s useful to talk about ‘conflict theory’, i.e. as a general theory of disagreement. It’s more useful in a form like ‘Marxism is a conflict theory’.
And then a ‘conflict theorist’ is someone who, in some context, believes a conflict theory, but not that disagreements generally are due to conflict (let alone in all contexts).
So, from the perspective of a ‘working class versus capital class’ conflict theory, public choice theory is obviously a weapon used by the capital class against the working class. But other possible conflict theories might be neutral about public choice theory.
Maybe what makes ‘conflict theory’ seem like a single thing is the prevalence of Marxism-like political philosophies.
This example looks like yet another instance of conflict theory imputing bad motives where they don’t exist and generally leading one astray.
A large part of this example relies on “Buchanan having a racist political agenda and using public choice theory as a vehicle for achieving this agenda” being a true proposition. I cannot assign a high degree of credibility to this proposition, though, considering Buchanan is the same guy who wrote this:
“Given the state monopoly as it exists, I surely support the introduction of vouchers. And I do support the state financing of vouchers from general tax revenues. However, although I know the evils of state monopoly, I would also want, somehow, to avoid the evils of race-class-cultural segregation that an unregulated voucher scheme might introduce. In principle, there is, after all, much in the “melting pot” notion of America. And there is also some merit in the notion that the education of all children should be a commonly shared experience in terms of basic curriculum, etc. We should not want a voucher scheme to reintroduce the elite that qualified for membership only because they have taken Latin and Greek classics. Ideally, and in principle, it should be possible to secure the beneficial effects of competition, in providing education, via voucher support, and at the same time to secure the potential benefits of commonly shared experiences, including exposure to other races, classes and cultures. In practise, we may not be able to accomplish the latter at all. But my main point is, I guess, to warn against dismissing the comprehensive school arguments out of hand too readily.”
Talk is cheap, especially when claiming not to hold opinions widely considered blameworthy.
Buchanan’s academic career (and therefore ability to get our attention) can easily depend on racists’ appetite for convenient arguments regardless of his personal preferences.
I mostly think of conflict theory as a worldview characterized by (a) assuming that bad things mostly happen because of bad people, and (b) assuming that the solution is mostly to punish them and/or move power away from them. I think of mistake theory as a worldview characterized by assuming that people do not intend to be evil (although they can still have bad incentives).
Why not integrate both perspectives: people make genuine mistakes due to cognitive limitations, and they also genuinely have different values that are in conflict with each other, and the right way to frame these problems is “bargaining by bounded rationalists” where “bargaining” can include negotiation, politics, and war. (I made a 2012 post suggesting this frame, but maybe should have given it a catchy name...)
Personally, my view is that mechanism design is more-or-less-always the right way to think about these kinds of problems. Sometimes that will lead to the conclusion that someone is making an honest mistake, sometimes it will lead to the conclusion that punishment is an efficient strategy, and often it will lead to other conclusions.
(I wrote the above before seeing this part.) I guess “mechanism design” is similar to “bargaining by bounded rationalists”, so you seem to have reached a similar conclusion. But “mechanism design” kind of assumes there’s a disinterested third party who has the power to impose a “mechanism” designed to be socially optimal, whereas often you’re one of the involved parties, and “bargaining” is a more general framing that also makes sense in that case.
If your concern is that this is evidence that the OP is wrong (since it has conflict-theoretic components, which are mindkillers), it seems important to establish that there are important false object-level claims, not just things that make such mistakes likely. If you can’t do that, maybe change your mind about how much conflict theory introduces mistakes?
If you’re just arguing that laying out such models is likely to have bad consequences for readers, this is an important risk to track, but it’s also changing the subject from the question of whether the OP’s models do a good job explaining the data.
This is a really good point and a great distinction to make.
As an example, suppose I hear a claim that some terrorist group likes to eat babies. Such a claim may very well be true. On the other hand, it’s the sort of claim which I would expect to hear even in cases where it isn’t true. In general, I expect claims of the form “<enemy> is/wants/does <evil thing>”, regardless of whether those claims have any basis.
Now, clearly looking into the claim is an all-around solid solution, but it’s also an expensive solution—it takes time and effort. So, a reasonable question to ask is: should the burden of proof be on writer or critic? One could imagine a community norm where that sort of statement needs to come with a citation, or a community norm where it’s the commenters’ job to prove it wrong. I don’t think either of those standards are a good idea, because both of them require the expensive work to be done. There’s a correct Bayesian update whether or not the work of finding a citation is done, and community norms should work reasonably well whether or not the work is done.
A norm which makes more sense to me: there’s nothing wrong with writers occasionally dropping conflict-theory-esque claims. But readers should be suspicious of such claims a-priori, and just as it’s reasonable for authors to make the claim without citation, it’s reasonable for readers to question the claim on a-priori grounds. It makes sense to say “I haven’t specifically looked into whether <enemy> wants <evil thing>, but that sounds suspicious a-priori.”
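To put toy numbers on that a-priori suspicion (my own illustration; every figure below is made up): if an accusation of the form “<enemy> does <evil thing>” would be made almost as often when it is false as when it is true, the likelihood ratio is close to 1, and hearing the accusation should only move you a little.

```python
prior_true = 0.3        # hypothetical prior that the accusation is true
p_claim_if_true = 0.9   # such accusations get made when they are true...
p_claim_if_false = 0.6  # ...but also fairly often when they are false

# Bayes' rule: P(true | claim was made)
posterior_true = (p_claim_if_true * prior_true) / (
    p_claim_if_true * prior_true + p_claim_if_false * (1 - prior_true)
)
print(round(posterior_true, 3))  # ~0.391: only a small update from the 0.3 prior
```

The correct update exists whether or not anyone does the work of finding a citation; the citation’s job is to change those likelihoods, which is exactly why the a-priori skepticism is reasonable in its absence.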
This feels very similar to the debate on the MTG color system a while ago, which went (as half-remembered from long enough ago that I don’t remember exactly how long it’s been, and the post has since been deleted):
A: [proposal of personality sorting system.]
B: [statement/argument that personality sorting systems are typically useless-to-harmful]
A: but this doesn’t respond to my particular personality system.
I’m sympathetic to B (equivalent to johnswentworth) here. If members of category X are generally useless-to-harmful, it’s unfair and anti-truth to disallow incorporating that knowledge into your evaluations of an X. On the other hand, A could have provided rich evidence of why their particular system was good, and B could have made the exact same statement, and it would still be true. If there are ever exceptions to the rule of “category X is useless-to-harmful”, you need to have a system for identifying them.
[I’m going to keep talking about this in the MTG case because I think a specific case is easier to read than “category X”, and it’s less loaded for me than talking about my own piece; if the correspondences aren’t obvious, let me know and I can clarify]
A partial solution would be for B to outline not only that they’re skeptical of personality systems, but why, and what specific things would increase their estimation of a particular system. This is a lot to ask, which is a tax on this particular form of criticism. But if the problem is as described, there’s a lot of utility in writing it up once, well, and linking to it as necessary.
@johnswentworth, if you’re up for it I think for this and other reasons there’s a lot of value in doing a full post on your general principle (with a link to this discussion). People clearly want to talk about it, and it seems valuable for it to have its own, easily-discoverable, space instead of being hidden behind my post. I would also like to resolve the general principle before discussing how to apply it to this post, which is one reason I’ve held back on participating in this sub-thread.
I probably won’t get to that soon, but I’ll put it on the list.
I also want to say that I’m sorry for kicking off this giant tangential thread on your post. I know this sort of thing can be a disincentive to write in the future, so I want to explicitly say that you’re a good writer, this was a piece worth reading, and I would like to read more of your posts in the future.
Who, specifically, is the enemy here, and what, specifically, is the evil thing they want?
It seems to me as though you’re describing motives as evil which I’d consider pretty relatable, so as far as I can tell, you’re calling me an enemy with evil motives. Are people like me (and Elizabeth’s cousin, and Elizabeth herself, both of whom are featured in examples) a special exception whom it’s nonsuspect to call evil, or is there some other reason why this is less suspect than the OP?
By “enemy” I meant the hypothetical terrorist in the “some terrorist group likes to eat babies” example.
I’m very confused about what you’re perceiving here, so I think some very severe miscommunication has occurred. Did you accidentally respond to a different comment than you thought?
I do think that I tend to update downwards on the likelihood of a piece being true if it seems to have obvious alternative generators for how it was constructed that are unlikely to be very truth tracking. Obvious examples here are advertisements and political campaign speeches.
I do think in that sense I think it’s reasonable to distrust pieces of writing that seem like they are part of some broader conflict, and as such are unlikely to be generated in anything close to an unbiased way. A lot of conflict-theory-heavy pieces tend to be part of some conflict, since accusing your enemies of being evil is memetic warfare 101.
I am not sure (yet) what the norms for discussion around these kinds of updates should be, but I did want to bring up that there exist some valid Bayesian inferences here.
The whole question of the essay is basically “who should we be angry at”?
While the post has a few sentences about moral blame, the main thesis is that power allows people to avoid committing direct crime while having less-powerful people commit those crimes instead (and hiding this from the powerful people). This is a denotative statement that can be evaluated independent of “who should we be angry at”.
Such denotative statements are very useful when considering different mechanisms for resolving principal-agent problems. Mechanism design is, to a large extent, a conflict theory, because it assumes conflicts of interest between different agents, and is determining what consequences should happen to different agents, e.g. in some cases “who we should be angry at” if that’s the best available implementation.
Mechanism design is, to a large extent, a conflict theory
I would say that mechanism design is how mistake theorists respond to situations where conflict theory is relevant—i.e., where there really is a “bad guy”. Mechanism design is not about “what consequences should happen to different agents”, it’s about designing a system to achieve a goal using unaligned agents—“consequences” are just one tool in the tool box, and mechanism design (and mistake theory) is perfectly happy to use other tools as well.
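(To illustrate the "other tools in the toolbox" point, here is a toy sketch of my own, with made-up payoffs and names: a principal delegates a task to a self-interested agent and, instead of adding a punishment, simply changes the payment rule so that the agent's best response becomes the desired behavior.)

```python
# Toy principal-agent sketch. The principal wants high effort; the agent
# prefers low effort unless the contract makes high effort worth it.
# All payoffs are made up; the point is that changing the payment rule
# (a mechanism-design move) aligns incentives without any punishment term.

SUCCESS_PROB = {"low": 0.2, "high": 0.9}  # chance the project succeeds
EFFORT_COST = {"low": 1.0, "high": 4.0}   # the agent's private cost of effort

def agent_utility(effort, wage_on_success, wage_on_failure):
    p = SUCCESS_PROB[effort]
    expected_wage = p * wage_on_success + (1 - p) * wage_on_failure
    return expected_wage - EFFORT_COST[effort]

def best_response(wage_on_success, wage_on_failure):
    return max(SUCCESS_PROB, key=lambda e: agent_utility(e, wage_on_success, wage_on_failure))

# Flat wage: the agent is paid 6 regardless of outcome, so shirking wins.
print(best_response(wage_on_success=6, wage_on_failure=6))  # prints "low"

# Success-contingent wage: the agent's best response flips to high effort,
# and no punishment appears anywhere in the model.
print(best_response(wage_on_success=8, wage_on_failure=0))  # prints "high"
```

Punishment could be modeled here as a negative wage term, but nothing forces the designer to use it; restructuring payoffs, changing information flows, or splitting the task differently are equally valid moves.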
the main thesis is that power allows people to avoid committing direct crime while having less-powerful people commit those crimes instead … This is a denotative statement that can be evaluated independent of “who should we be angry at”.
There’s certainly a denotative idea in the OP which could potentially be useful. On the other hand, saying “the post has a few sentences about moral blame” seems like a serious understatement of the extent to which the OP is about who to be angry at.
in some cases “who we should be angry at” if that’s the best available implementation
The OP didn’t talk about any other possible implementations, which is part of why it smells like conflict theory. Framing it through principal-agent problems would at least have immediately suggested others.
Mechanism design is, to a large extent, a conflict theory, because it assumes conflicts of interest between different agents, and is determining what consequences should happen to different agents, e.g. in some cases “who we should be angry at” if that’s the best available implementation.
"Conflict theory" is specifically about the meaning of speech acts. This is not the general question of conflicting interests. The question of conflict vs mistake theory is fundamentally: what are we doing when we talk? Are we fighting over the exact location of a contested border, or trying to refine our compression of information to better empower us to reason about things we care about?
Mistake theorists treat politics as science, engineering, or medicine. The State is diseased. We’re all doctors, standing around arguing over the best diagnosis and cure. Some of us have good ideas, others have bad ideas that wouldn’t help, or that would cause too many side effects.
Conflict theorists treat politics as war. Different blocs with different interests are forever fighting to determine whether the State exists to enrich the Elites or to help the People.
Part of what seems strange about drawing the line at denotative vs. enactive speech is that there are conflict theorists who can speak coherently/articulately in a denotative fashion (about conflict), e.g.:
Clausewitz’s On War (“war is the continuation of politics by other means”)
It seems both coherent and consistent with conflict theory to believe “some speech is denotative and some speech is enacting conflict.”
(I do see a sense in which mechanism design is a mistake theory, in that it assumes that deliberation over the mechanism is possible and desirable; however, once the mechanism is in place, it assumes agents never make mistakes, and differences in action are due to differences in values)
I don’t quite draw the line at denotative vs enactive speech—command languages which are not themselves contested would fit into neither “conflict theory” nor “mistake theory.”
"War is the continuation of politics by other means" is a very different statement than its converse, that politics is a kind of war. Clausewitz is talking about states with specific, coherent policy goals, achieving those goals through military force, in a context where there's comparatively little pretense of a shared discourse. This is very different from the kind of situation described in Rao, where a war is being fought in the domain of ostensibly "civilian" signal processing.
I’m not sure I endorse this comment as written, but just wanted to note that I appreciate trying to tease out why the article felt subtly off to you.
Something about framing it through mistake theory still feels off to me, too, though. I see where you're coming from with the naive conflict theory feeling off. But something important that the article seemed to be grappling with (or at least, that I was grappling with as I read the article, and especially through the lens of your comment) was something like:
“We have a bunch of naive intuitions about who to blame. Those naive intuitions get weird in sufficiently complex systems, and it’s not obvious what to do. One thing you might do is discard the blame concept. But, this feels a bit unsatisfying because many people are still playing the blame game, and directing the blame at someone, and it’s rarely the privileged people who were able to purchase distance from the blameworthy things. And maybe the solution here is to get everyone out of conflict theory, but it’s not obvious to me that this is a tractable or even optimal-given-buy-in approach, because people in fact do fight over things.” [edit: and jessicata’s note that incentive alignment is conflict theory feels relevant]
I have a model of a common type of disagreement where people get angry at someone walking in with a mistake theory. It goes like this: Alice has some power over Bob, and kinda self-deceives into a situation where it's right for them to take resources from Bob. As Bob gets angry at Alice and tries to form a small political force to punish Alice, Charlie comes along and is like "No, you don't understand, Alice just made an error of reasoning, and if I explain this to them they won't make that mistake again!" Bob then gets really angry at Charlie and thinks they're maybe trying to secretly help Alice, or else are strikingly oblivious / conflict-averse to an unhealthy degree. (Note this is a mistake theory about the disagreement between Bob and Charlie, and a conflict theory about the disagreement between Bob and Alice. And Charlie is wrong to use a mistake theory.)
I think the reason I'm tempted to split mistake and conflict into tribes is that I do know people who largely fit into one or the other. I knew people at school who always viewed interpersonal conflict as emanating from tribal self-interest, and who would view my attempt to show a solution that didn't require someone being at fault as me trying to make them submit to some kinda weird technicality, and got justifiably irritated. I also know people who are very conflict-averse but also have an understanding of the complexity of reality, and so always assume it is merely a principal-agent problem or an information-flow problem, as opposed to going "Yeah, Alice is just acting out of self-interest here, we need to let her know that's not okay, and let's not obfuscate this unnecessarily." But I think the goal is to have one's beliefs correspond to reality—to use a conflict theory when that's true, a mistake theory when that's true, and not pre-commit to one side or the other regardless of how reality actually is.
I do think that conflict theories are often pretty derailing to bring up when trying to have a meaningful one-on-one public debate, and that it's good to think carefully about specific norms for how to do such a thing. But I think that straight-up banning them is likely the wrong move. There are many places where they have no place, such as a math journal. However, the mathematical community will still need a place where internal politics and norm violations can be raised and discussed.
I think the whole “mistake theory vs conflict theory” thing needs to be examined and explained in greater detail, because there is a lot of potential to get confused about things (at least for me). For example:
Both "mistake statements" and "conflict statements" can be held sincerely, or can be lies strategically used against an enemy. For example, I may genuinely believe that X is racist, and then I would desire to make people aware of a danger X poses. The fact that I do not waste time explaining and examining specific details of X's beliefs is simply because time is a scarce resource, and warning people against a dangerous person is a priority. Or, I may knowingly falsely accuse X of being racist, because I assume that gives me a higher probability of winning the tribal fight, compared to an honest debate about our opinions. (Note: The fact that I assume my opponent would win a debate doesn't necessarily imply that I believe he is right. Maybe his opinions are simply more viral; more compatible with existing biases and prejudices of listeners.) The same goes for the mistake theory: I can sincerely explain how most people are not evil and yet Moloch devours everything; or I may be perfectly aware that the people of my tribe are at this moment fighting for our selfish collective interest, and yet present an ad-hoc theory to confuse the nerds of the opposing tribe into inaction.
Plus, there is always a gray zone between knowingly lying and sincerely held beliefs: unconscious biases, plausible deniability, all the "this person seems to be genuinely mistaken, but at the same time they resist all attempts to explain" which seems to be the behavior of most people most of the time. This balancing act of being "aware on some level, but unaware on another level" allows us to navigate towards achieving our selfish goals while maintaining the image of innocence (including the self-image).
Then, we have different levels of meta. For example, suppose that Alice takes Bob’s apple and eats it. This is a factual description. On the first level, Charlie the conflict theorist might say “she knowingly stole the apple”, while Diana the mistake theorist might say “she just made a mistake and believed the apple was actually hers”. Now on the second level, a conflict theorist could say “of course Charlie accuses Alice of acting badly; he is a misogynist” (conflict explanation of conflict explanation), or “of course Diana would defend Alice; women have a strong in-group bias” (conflict explanation of mistake explanation). A mistake theorist could say “Charlie is a victim of illusion of transparency, just because he noticed the apple belongs to Bob, doesn’t mean Alice had to notice it, too” (mistake explanation of conflict explanation), or “Diana seems to be a nice person who would never steal, and she projects her attitude on Alice” (mistake explanation of mistake explanation). On the third level… well, it gets complicated quickly. And yet, people make models of each other, and make models of models other people have about them, so the higher levels will get constructed.
By the way, notice that “mistake theorists” and “conflict theorists” are not two opposing tribes, in the sense of tribal conflict. The same political tribe may contain both of them: some people believe their opponents are evil, others believe they are making a tragic mistake; both believe the opponents have to be stopped, by force if necessary. There may be conflict theorists on both sides: both explaining why the other side is making a power grab and needs to be stopped; or mistake theorists on both sides: both explaining why the other side is deluded.
...and I feel pretty sure there are other complications that I forgot at the moment.
EDIT:
For example, the conflict theory can be expressed in a mistake-theory lingo. Instead of saying “my evil opponent is just trying to get more power”, say “my uneducated opponent is unaware of his unconscious biases that make him believe that things that get him more power are the right ones”. You accused him of pretty much the same thing, but it makes your statement acceptable among mistake theorists.
I might be missing the forest for the trees, but all of those still feel like they end up making some kinds of predictions based on the model, even if they’re not trivial to test. Something like:
If Alice were informed by some neutral party that she took Bob's apple, Charlie would predict that she would not show meaningful remorse or try to make up for the damage done beyond trivial gestures like an off-hand "sorry", and that some other minor extraction of resources is likely to follow; Diana would predict that Alice would treat her overreach more seriously when informed of it. Something similar can be done on the meta-level.
None of these are slam dunks, and there are a bunch of reasons why the predictions might turn out exactly as laid out by Charlie or Diana even if their underlying theory is wrong, but that just feels like how Bayesian cookies crumble, and I would definitely expect evidence to accumulate over time in one direction or the other.
Strong opinion weakly held: it feels like an iterated version of this prediction-making and tracking over time is how our native bad actor detection algorithms function. It seems to me that shining more light on this mechanism would be good.
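(A minimal sketch of that iterated prediction-tracking, with invented likelihoods; nothing here is meant as a real model of Alice, just the shape of the bookkeeping.)

```python
# Track two hypotheses about Alice over repeated observations:
#   conflict (Charlie): Alice extracts resources knowingly
#   mistake  (Diana):   Alice makes honest mistakes
# The likelihoods below are invented purely for illustration.

def update(p_conflict, p_obs_given_conflict, p_obs_given_mistake):
    """One Bayes update on the probability of the conflict hypothesis."""
    numerator = p_conflict * p_obs_given_conflict
    denominator = numerator + (1 - p_conflict) * p_obs_given_mistake
    return numerator / denominator

# "no_remorse" (when informed) is assumed more likely under the conflict
# hypothesis; "genuine_fix" is assumed more likely under the mistake hypothesis.
LIKELIHOODS = {
    "no_remorse": (0.7, 0.2),   # (P(obs | conflict), P(obs | mistake))
    "genuine_fix": (0.2, 0.7),
}

p = 0.5  # start agnostic
for obs in ["no_remorse", "no_remorse", "genuine_fix", "no_remorse"]:
    p = update(p, *LIKELIHOODS[obs])
    print(f"{obs}: P(conflict) = {p:.2f}")
```

No single observation settles it, but the posterior drifts, which is roughly what I mean by the native bad-actor detection running over time.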
After reading this and the comments you linked, I think people mean several different things by conflict/mistake theory.
I mostly think of conflict theory as a worldview characterized by (a) assuming that bad things mostly happen because of bad people, and (b) assuming that the solution is mostly to punish them and/or move power away from them. I think of mistake theory as a worldview characterized by assuming that people do not intend to be evil (although they can still have bad incentives). I see mechanism design as the prototypical mistake theory approach: if people are misaligned, then restructure the system to align their incentives. It’s a technical problem, and getting angry at people is usually unhelpful.
In the comment thread you linked, Scott characterizes conflict theory as “the main driver of disagreement is self-interest rather than honest mistakes”. That view matches up more with the example you give: the mistake theorist assumes that people have “good” intent, and if you just explain that their actions are harmful, then they’ll stop. Under this interpretation, mechanism design is conflict-theory-flavored; it’s thinking of people as self-interested and then trying to align them anyway.
(I think part of the confusion is that some people are coming in with the assumption that acting in self-interest is automatically bad, and others are coming in with more of an economic/game theory mindset. Like, from an economic viewpoint, there’s no reason why “the main driver of disagreement is self-interest” would lead to arguing that public choice theory is racist, which was one of Scott’s original examples.)
So I guess one good question to think about is: how do we categorize mechanism design? Is it conflict, is it mistake, is it something else? Different answers correspond to different interpretations of what “conflict” and “mistake” theory mean. I’m pretty sure my interpretation is a much better fit to the examples and explanations in Scott’s original post on the topic, and it seems like a natural categorization to me. On the other hand, it also seems like there’s another natural category of naive-mistake-theorists who just assume honest mistakes, as in your Bob-Charlie example, and apparently some people are using the terms to capture that category.
Personally, my view is that mechanism design is more-or-less-always the right way to think about these kinds of problems. Sometimes that will lead to the conclusion that someone is making an honest mistake, sometimes it will lead to the conclusion that punishment is an efficient strategy, and often it will lead to other conclusions.
I don't share this intuition. The Baffler article's overall argument is something like:
Calhoun and Buchanan both had racist agendas (maintaining slavery and segregation). (They may have these agendas due to some combination of personal self-interest and class self-interest)
They promoted ideas about democratic governance (e.g. that majority rule is insufficient) that were largely motivated by these agendas.
These ideas are largely the same as the ones of public choice theory (as pointed out by Cowen and Tabarrok)
Therefore, it is likely that public choice theory is advancing a racist agenda, and continues being advocated partially for this reason.
Overall, this is an argument that personal self-interest, or class self-interest, are driving the promotion of public choice theory. (Such interests and their implications could be studied within economics; though, economics typically avoids discussing group interests except in the context of discrete organizational units such as firms)
Another way of looking at this is:
Economics, mechanism design, public choice theory, etc are meta-level theories about how to handle conflicts of interest.
It would be desirable to have agreement on good meta-level principles in order to resolve object-level conflicts.
However, the choice of meta-level principles (and, the mapping between those principles and reality) is often itself political or politicized.
Therefore, there will be conflicts over these meta-level principles.
Let’s imagine for a minute that we didn’t know any of the background, and just think about what we might have predicted ahead of time.
Frame 1: conflict theory is characterized by the idea that problems mostly come from people following their own self-interest. Not knowing anything else, what do we expect conflict theorists to think about public choice theory—a theory whose central premise is modeling public servants as following their own self-interests/incentives? Like, the third sentence of the wikipedia article is “it is the subset of positive political theory that studies self-interested agents (voters, politicians, bureaucrats) and their interactions”.
If conflict theory is about problems stemming from people following their self-interest, public choice theory ought to be right up the conflict theorist’s alley. This whole “meta-level conflict” thing sounds like a rather contrived post-hoc explanation; a-priori there doesn’t seem to be much reason for all this meta stuff. And conflict theorists in practice seem to be awfully selective about when to go meta, in a way that we wouldn’t predict just based on “problems mostly stem from people following their self-interest”.
On the other hand...
Frame 2: conflict theory is characterized by the idea that bad things mostly happen because of bad people, and the solution is to punish them. In this frame, what would we expect conflict theorists to think of public choice theory?
Well, we’d expect them to dismiss it as obviously wrong—it doesn’t denounce any bad people—and therefore also probably an attempt by bad people to steer things the way they want.
If conflict theory is characterized by “bad things happen because of bad people”, then an article about how racism secretly underlies public choice theory is exactly the sort of thing we’d predict.
I think it’s a genuinely difficult problem to draw the boundary between a conflict and a mistake theory, in no small part due to the difficulties in drawing the boundary between lies and unconscious biases (which I rambled a bit about here). You can also see the discussion on No, it’s not The Incentives—it’s you as a disagreement over where this boundary should be.
That said, one thing I'll point out is that explaining Calhoun and Buchanan's use of public choice theory as entirely a rationalisation for their political goals is a conflict theory. It's saying that their bringing public choice theory into the conversation was not a good-faith attempt to convey how they see the world, but obfuscation in favour of their political side winning. And more broadly, saying that public choice theory is racist is a theory that says the reason it is brought up in general is not that people have differing understandings of economics, but that people have different political goals and are trying to win.
I find for myself that thinking of 'conflict theorists' as a single coherent group is confusing me, and that I should instead replace the symbol with the substance when I'm tempted to use it, because there are many types of people who sometimes use conflict theories, and it is confusing to lump them in with people who always use them, because they often have different reasons for using them when they do.
To give one example of people who always use it: there are certain people who have for most of their lives found that the main determinant of outcomes for them is political conflict by people above them, and who are only really able to understand the world using theories of conflict. They've also never gained a real understanding of any of the fascinating and useful different explanations for how social reality works (example, example), or a sense that you can often expand massively rather than fight over existing resources. And when they're looking at someone bringing in public choice theory to argue one side of a social fight, they get the impression that the person is finding clever arguments for their position, rather than being honest.
(This is a mistake theory of why some people primarily reason using conflict theories. There are conflict theories that explain it as well.)
I think it’s good to be able to describe what such people are doing, and what experiences have lead them to that outlook on life. But I also think that there are many reasons for holding a conflict theory about a situation, and these people are not at all the only examples of people who use such theories regularly.
Added: clone of saturn’s 3 point explanation seems right to me.
I get what you’re saying about theories vs theorists. I agree that there are plenty of people who hold conflict theories about some things but not others, and that there are multiple reasons for holding a conflict theory.
None of this changes the original point: explaining a problem by someone being evil is still a mind-killer. Treating one’s own arguments as soldiers is still a mind-killer. Holding a conflict theory about any particular situation is still a mind-killer, at least to the extent that we’re talking about conflict theory in the form of “bad thing happens because of this bad person” as opposed to “this person’s incentives are misaligned”. We can explain other peoples’ positions by saying they’re using a conflict theory, and that has some predictive power, but we should still expect those people to usually be mind-killed by default—even if their arguments happen to be correct.
As you say, explaining Calhoun and Buchanan's use of public choice theory as entirely a rationalisation for their political goals is a conflict theory. Saying that people bring up public choice theory not due to differing economic understanding but due to different political goals is a conflict theory. And I expect people using either of those explanations to be mind-killed by default, even if the particular interpretation were correct.
Even after all this discussion of theories vs theorists, “conflict theory = predictably wrong” still seems like a solid heuristic.
Sorry for the delay, a lot has happened in the last week.
Let me point to where I disagree with you.
My sense is you are underestimating the cost of not being able to use conflict theories. Here are some examples, where I feel like prohibiting me from even considering that a bad thing happened because a person was bad will severely limit my ability to think and talk freely about what is actually happening.
A Harvard professor of social science arguing that replications are disrespectful and should be assumed false.
Physics academia writing an attack-piece on a non-academic after he presented a novel theory of fundamental physics in a lecture series at Oxford.
Many of Robin Hanson's great hypotheses, like politics isn't about policy, inequality talk is about grabbing, and too much consulting?
Things that went down with SlateStarCodex and discussion of the culture war.
Sam Harris and his aggressive clashes with people like Ezra Klein and Glenn Greenwald.
There's something very valuable that you're pointing at, and I agree with a lot of it. There shouldn't be conflict theories in a math journal. It's plausible to me there shouldn't be conflict theories in an economics journal. And it's plausible to me that the goal should be for the frontpage of LessWrong to be safe from them too, because they do bring major costs in terms of their mindkilling nature, and furthermore because several of the above bullet points are simply off-topic for LessWrong. We're not here to discuss current-day tribal politics in various institutions, industries and communities.
And if I were writing publicly about any of the above topics, I would heavily avoid bringing conflict theories—and have in the past re-written whole essays to be making only object-level points about a topic rather than attacking a particular person’s position, because I felt the way I had written it would come across as a bias-argument / conflict theory and destroy my ability to really dialogue with people who disagreed with me. Rather than calling them biased or self-interested, I prefer to use the most powerful of rebuttals in the pursuit of truth, which is showing that they’re wrong.
But ruling it out wholly in one's discourse and life seems way too much. I think there are cases where wholly censoring conflict theories will cost far more than it's worth, and that removing them entirely from your discourse will cripple you and allow you to be taken over by outside forces that want your resources.
For example, I can imagine a relatively straightforward implementation of “no conflict theories” in a nearby world meaning that I am not able to say that study after study is suspect, or that a position is being pushed by political actors, unless I first reinvent mechanism theory and a bunch of microeconomics and a large amount of technical language to discuss bias. If I assume the worst about all of the above bullet points, not being able to talk about bad people causing bad things could mean we are forced to believe lots of false study results and ignore a new theory of fundamental physics, plus silence economists, bloggers, and public intellectuals.
The Hanson examples above feel the strongest to me because they're central examples of something that's able to lead to a universal, deep insight about reality and be a central part of LessWrong's mission in understanding human rationality, whereas the others are mostly about current tribal politics. But I think they all substantially affect how much to trust our info sources.
My current sense is that I should think of posing conflict theories as a highly constrained, limited communal resource, and that while spending it will often cause conflict and people to be mind-killed, a rule that says one can never use that resource will mean that when that resource is truly necessary, it won’t be available.
***
Hmm.
I re-read the OP, and realise I actually identify a lot with your initial comment, and that I gave Elizabeth similar feedback when I read an earlier draft of hers a month ago. The wording of the OP crosses a few of my personal lines such that I would not publish it. And it's actually surprisingly accurate to say that the key thing I'd be doing if I were editing the OP would be turning it from things that had a hint of being a conflict theory (aren't people with power bad!) to things that felt like a mistake theory (here's an interesting mechanism where you might mistakenly allocate responsibility). Conflict theories tend to explode and eat up communal resources in communities and on the internet generally, and are a limited (though necessary) resource that I want to use with great caution.
And as I said above, if I were writing publicly about topics where I had conflict theories, I would heavily avoid bringing them in. When I get really irritated with someone's position and have a conflict theory about the source of the disagreement, I still write mistake-theory posts like this, a post with no mention of the original source of motivation.
I think that one of the things that’s most prominent to me on the current margin is that I feel like there are massive blockers on public discourse, stopping people from saying or writing anything, and I have a model whereby telling people who write things like the OP to do more work to make it all definitely mistake theory (which is indeed a standard I hold myself to) will not improve the current public discourse, but on the current margin simply stop public discourse. I feel similarly about Jessicata’s post on AI timelines, where it is likely to me that the main outcome has been quite positive—even though I think I disagree with each of the three arguments in the post and its conclusion—because the current alternative is almost literally zero public conversation about plans for long AI timelines. I already am noticing personal benefits from the discourse on the subject.
In the first half of this comment I kept arguing against the position “We should ban all conflict theories” rather than “Conflict theories are the mind-killer” which are two very different claims and only one of which you’ve been making. Right now I want to defend people’s ability to write down their thoughts in public, and I think the OP is strongly worth publishing in the situation we’re in. I could imagine a world where there was loads of great discussion of topics like what the OP is about, where the OP stands out as not having met a higher standard of effort to avoid mind-killing anyone that the other posts have, where I’d go “this is unnecessarily likely to make people feel defensive and like there’s subtle tribal politics underpinning its conclusions, consider these changes?” but right now I’m very pro “Cool idea, let me share my thoughts on the subject too.”
(Some background: The OP was discussed about 2 weeks ago on Elizabeth's FB wall, where someone else proposed that this post needed re-writing for PR reasons, and I argued there that they shouldn't put such high bars on people writing things. I think that person's specific suggestion, if taken seriously, would be incredibly harmful to public discourse regardless of its current health, whereas in this case I think your literal claims are just right. Regardless, I am strongly pro the post and others like it being published.)
But are theories that tend to explode and eat up communal resources therefore less likely to be true? If not, then avoiding them for the sake of preserving communal resources is a systematic distortion on the community’s beliefs.
The distortion is probably fine for most human communities: keeping the peace with your co-religionists is more important than doing systematically correct reasoning, because religions aren’t trying to succeed by means of doing systematically correct reasoning. But if there is to be such a thing as a rationality community specifically, maybe communal resources that can be destroyed by the truth, should be.
(You said this elsewhere in the thread: “the goal is to have one’s beliefs correspond to reality—to use a conflict theory when that’s true, a mistake theory when that’s true”.)
Expected infrequent discussion of a theory shouldn’t lower estimates of its probability. (Does the intuition that such theories should be seen as less likely follow from most natural theories predicting discussion of themselves? Erroneous theorizing also predicts that, for example “If this statement is correct, it will be the only topic of all future discussions.”)
In general, it shouldn’t be possible to expect well-known systematic distortions for any reason, because they should’ve been recalibrated away immediately. What not discussing a theory should cause is lack of precision (or progress), not systematic distortion.
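(A small numeric sketch of the recalibration I have in mind, with invented probabilities: if the taboo is known, then "only mistake theories were stated about X" is nearly as likely whether or not a conflict theory is actually the right explanation, so a calibrated reader barely updates on the silence.)

```python
# Sketch of recalibrating for a known filter. Suppose conflict theories about
# X are taboo, so we only ever hear mistake theories stated in public.
# All probabilities are invented for illustration.

prior_conflict = 0.4  # prior that a conflict theory best explains X

# Probability of hearing only mistake theories stated about X...
p_silence_if_conflict_no_taboo = 0.2   # ...if conflict explains X and there were no taboo
p_silence_if_conflict_taboo    = 0.9   # ...if conflict explains X, given the known taboo
p_silence_if_mistake           = 0.95  # ...if a mistake theory really does explain X

def posterior(prior, p_e_given_h, p_e_given_not_h):
    return prior * p_e_given_h / (prior * p_e_given_h + (1 - prior) * p_e_given_not_h)

# A reader who ignores the taboo treats the silence as strong evidence against
# the conflict explanation:
print(posterior(prior_conflict, p_silence_if_conflict_no_taboo, p_silence_if_mistake))  # ~0.12

# A reader who knows about the taboo barely updates at all:
print(posterior(prior_conflict, p_silence_if_conflict_taboo, p_silence_if_mistake))     # ~0.39
```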
Consider a situation where:
People are discussing phenomenon X.
In fact, a conflict theory is a good explanation for phenomenon X.
However, people only state mistake theories for X, because conflict theories are taboo.
Is your prediction that the participants in the conversation, readers, etc, are not misled by this? Would you predict that, if you gave them a survey afterwards asking for how they would explain X, they in fact give a conflict theory rather than a mistake theory, since they corrected for the distortion due to the conflict theory taboo?
Would you correct your response so? (Should you?) If the target audience tends to act similarly, so would they.
Aside from that, “How do you explain X?” is really ambiguous and anchors on well-understood rather than apt framing. “Does mistake theory explain this case well?” is better, because you may well use a bad theory to think about something while knowing it’s a bad theory for explaining it. If it’s the best you can do, at least this way you have gears to work with. Not having a counterfactually readily available good theory because it’s taboo and wasn’t developed is of course terrible, but it’s not a reason to embrace the bad theory as correct.
Perhaps (75% chance?), in part because I’ve spent >100 hours talking about, reading about, and thinking about good conflict theories. I would have been very likely misled 3 years ago. I was only able to get to this point because enough people around me were willing to break conflict theory taboos.
It is not the case that everybody knows. To get from a state where not everybody knows to a state where everybody knows, it must be possible to talk openly about such things. (I expect the average person on this website to make the correction with <50% probability, even with the alternative framing “Does mistake theory explain this case well?”)
It actually does have to be a lot of discussion. Over-attachment to mistake theory (even when a moderate amount of contrary evidence is presented) is a systematic bias I've observed, and it can be explained by factors such as: conformity, social desirability bias (incl. fear), conflict-aversion, desire for a coherent theory that you can talk about with others, getting theories directly from others' statements, being bad at lying (and at detecting lying), etc. (This is similar to, and may even be considered a special case of, the question of why people are misled by propaganda even when there is some evidence that the propaganda is propaganda; see Gell-Mann amnesia.)
This seems a bit off as Jessica clearly knows about conflict theory. The whole thing about making a particular type of theory taboo is that it can’t become common knowledge.
That’s relevant to the example, but not to the argument. Consider a hypothetical Jessica less interested in conflict theory or a topic other than conflict theory. Also, common knowledge doesn’t seem to play a role here, and “doesn’t know about” is a level of taboo that contradicts the assumption I posited about the argument from selection effect being “well-known”.
Hm. Is “well-known” good enough here, or do you actually need common knowledge? (I expect you to be better than me at working out the math here.) If it’s literally the case that everybody knows that we’re not talking about conflict theories, then I agree that everyone can just take that into account and not be confused. But the function of taboos, silencing tactics, &c. among humans would seem to be maintaining a state where everyone doesn’t know.
There is no need for coordination or dependence on what others think. If you expect yourself to be miscalibrated, you just fix that. If most people act this way and accept the argument that convinced you, then you expect them to have done the same.
“Talking about conflict is a limited resource” seems very, very off to me.
There are two relevant resources in a community. One is actual trustworthiness: how often do people inform each other (rather than deceive each other), help each other (rather than cheat each other), etc. The other is correct beliefs about trustworthiness: are people well-calibrated and accurate about how trustworthy others (both in particular and in general) are. These are both resources. It’s strictly better to have more of each of them.
Talking about conflict in ways that are wrong is damaging a resource (it’s causing people to have incorrect beliefs). Using clickbaity conflict-y titles without corresponding evidence is spending a resource (attention). Talking about conflict informatively/accurately is not spending a resource, it’s producing a resource.
EDIT: also note, informative discussion of conflict, such as in Robin Hanson’s work, makes it easier to talk informatively about conflict in the future, as it builds up theoretical framework and familiarity. Which means “talking about conflict is a limited resource” is backwards.
I’m hearing you say “Politics is not the mind-killer, talking inaccurately and carelessly about politics is the mind-killer! If we all just say true things and don’t try to grab attention with misleading headlines then we’ll definitely just have a great and net positive conversation and nobody will feel needlessly threatened or attacked”. I feel like you are aware of how toxic things like bravery debates are, and I expect you agree they’d be toxic even if everyone tried very hard to only say true things. I’m confused.
I’m saying it always bears a cost, and a high one, but not a cost that cannot be overcome. I think that the cost is different in different communities, and this depends on the incentives, norms and culture in those communities, and you can build spaces where a lot of good discussion can happen with low cost.
You’re right that Hanson feels to me pretty different than my other examples, in that I don’t feel like marginal overcoming bias blogposts are paying a cost. I suspect this might have to do with the fact that Hanson has sent a lot of very costly signals that he is not fighting a side but is just trying to be an interested scientist. But I’m not sure why I feel differently in this case.
I’m going to try explaining my view and how it differs from the “politics is the mind killer” slogan.
People who are good at talking about conflict, like Robin Hanson, can do it in a way that improves the ability for people to further talk rationally about conflict. Such discussions are not only not costly, they’re the opposite of costly.
Some people (most people?) are bad at talking about conflict. They’re likely to contribute disinformation to these discussions. The discussions may or may not be worth having, but, it’s not surprising if high-disinformation conversations end up quite costly.
My view: people who are actually trying can talk rationally enough about conflict for it to be generally positive. The issue is not a question of ability so much as a question of intent-alignment. (Though, getting intent aligned could be thought of as a kind of skill). (So, I do think political discussions generally go well when people try hard to only say true things!)
Why would I believe this? The harms from talking about conflict aren’t due to people making simple mistakes, the kind that are easily corrected by giving them more information (which could be uncovered in the course of discussions of conflict). Rather, they’re due to people enacting conflict in the course of discussing conflict, rather than using denotative speech.
Yes, I am advocating a conflict theory, rather than a mistake theory, for why discussions of conflict can be bad. I think, if you consider conflict vs mistake theories, you will find that a conflict theory makes better predictions for what sorts of errors people make in the course of discussing conflict, than a mistake theory does. (Are errors random, or do they favor fighting on a given side / appeasing local power structures / etc?)
Basically, if the issue is adversarial/deceptive action (conscious or subconscious) rather than simple mistakes, then “politics is the mind-killer” is the wrong framing. Rather, “politics is a domain where people often try to kill each other’s minds” is closer.
In such a circumstance, building models of which optimization pressures are harming discourse in which ways is highly useful, and actually critical for social modeling. (As I said in my previous comment, it's strictly positive for an epistemic community to have better information about the degree of trustworthiness of different information systems.)
If you see people making conflict theory models, and those models seem correct to you (or at least, you don’t have any epistemic criticism of them), then shutting down the discussions (on the basis that they’re conflict-theorist) is actively doing harm to this model-building process. You’re keeping everyone confused about where the adversarial optimization pressures are. That’s like preventing people from turning on the lights in a room that contains monsters.
Therefore, I object to talking about conflict theory models as “inherently costly to talk about” rather than “things some (not all!) people would rather not be talked about for various reasons”. They’re not inherently costly. They’re costly because some optimization pressures are making them costly. Modeling and opposing (or otherwise dealing with) these is the way out. Insisting on epistemic discourse even when such discourse is about conflict is a key way of doing so.
Thank you, this comment helped me understand your position quite a bit. You're right: discussing conflict theories is not inherently costly; rather, they're often costly because powerful optimization pressures are punishing discussion of them.
I strongly agree with you here:
This is also a large part of my model of why discussions of conflict often go bad—power struggles are being acted out through (and systematically distorting the use of) language and reasoning.
(I am quite tempted to add that even in a room with mostly scribes, the incentive for actors to pretend to be scribes can make it very hard for a scribe to figure out whether someone is a scribe or an actor, and this information asymmetry can lead to scribes distrusting all attempts to discuss conflict theories and reading such discussions as political coordination.
Yet I notice that I pretty reflexively looked for a mistake theory there, and my model of you suggested to me the hypothesis that I am much less comfortable with conflict theories than mistake theories. I guess I’ll look out for this further in my thinking, and consider whether it’s false. Perhaps, in this case, it is way easier than I’m suggesting for scribes to recognise each other, and the truth is we just have very few scribes.)
The next question is under what norms, incentives and cultures can one have discussions of conflict theories where people are playing the role of Scribe, and where that is common knowledge. I’m not sure we agree on the answer to that question, or what the current norms in this area should be. I’m working on a longer answer, maybe post-length, to Zach’s comment below, so I’ll see if I can present my thoughts on that.
This is a very helpful comment, thank you!
By-the-way, this is a fantastic comment and would make a great post pretty much by itself (with maybe a little context about that to which it’s replying).
… seems to be exactly why it’s so difficult to discuss a conflict theory with someone already convinced that it’s true – any discussion is necessarily an attack in that conflict as it in effect presupposes that it might be false.
But that also makes me think that maybe the best rhetorical counter to someone enacting a conflict is to explicitly claim that one’s unconvinced of the truth of the corresponding conflict theory or to explicitly claim that one’s decoupling the current discussion from a (or any) conflict theory.
I generally endorse this line of reasoning.
Nice :-)
This seems like dramatically over-complicating the idea. I would expect a prototypical conflict theorist to reason like this:
Political debates have winners and losers—if a consensus is reached on a political question, one group of people will be materially better off and another group will be worse off.
Public choice theory makes black people worse off. (I don’t know if the article is right about this, but I’ll assume it’s true for the sake of argument.)
Therefore, one ought to promote public choice theory if one wants to hurt black people, and disparage public choice theory if one wants to help black people.
This explanation loses predictive power compared to the explanation I gave above. In particular, if we think of conflict theory as “bad things happen because of bad people”, then it makes sense why conflict theorists would think public choice theory makes black people worse off, rather than better off. In your explanation, we need that as an additional assumption.
I don’t think it’s useful to talk about ‘conflict theory’, i.e. as a general theory of disagreement. It’s more useful in a form like ‘Marxism is a conflict theory’.
And then a ‘conflict theorist’ is someone who, in some context, believes a conflict theory, but not that disagreements generally are due to conflict (let alone in all contexts).
So, from the perspective of a ‘working class versus capital class’ conflict theory, public choice theory is obviously a weapon used by the capital class against the working class. But other possible conflict theories might be neutral about public choice theory.
Maybe what makes ‘conflict theory’ seem like a single thing is the prevalence of Marxism-like political philosophies.
This example looks like yet another instance of conflict theory imputing bad motives where they don’t exist and generally leading you wrong.
A large part of this example relies on "Buchanan having a racist political agenda and using public choice theory as a vehicle for achieving this agenda" being a true proposition. I cannot assign a high degree of credibility to this proposition, though, considering Buchanan is the same guy who wrote this:
"Given the state monopoly as it exists, I surely support the introduction of vouchers. And I do support the state financing of vouchers from general tax revenues. However, although I know the evils of state monopoly, I would also want, somehow, to avoid the evils of race-class-cultural segregation that an unregulated voucher scheme might introduce. In principle, there is, after all, much in the "melting pot" notion of America. And there is also some merit in the notion that the education of all children should be a commonly shared experience in terms of basic curriculum, etc. We should not want a voucher scheme to reintroduce the elite that qualified for membership only because they have taken Latin and Greek classics. Ideally, and in principle, it should be possible to secure the beneficial effects of competition, in providing education, via voucher support, and at the same time to secure the potential benefits of commonly shared experiences, including exposure to other races, classes and cultures. In practise, we may not be able to accomplish the latter at all. But my main point is, I guess, to warn against dismissing the comprehensive school arguments out of hand too readily."
Source: http://www.independent.org/issues/article.asp?id=9115
Talk is cheap, especially when claiming not to hold opinions widely considered blameworthy.
Buchanan’s academic career (and therefore ability to get our attention) can easily depend on racists’ appetite for convenient arguments regardless of his personal preferences.
Why not integrate both perspectives: people make genuine mistakes due to cognitive limitations, and they also genuinely have different values that are in conflict with each other, and the right way to frame these problems is “bargaining by bounded rationalists” where “bargaining” can include negotiation, politics, and war. (I made a 2012 post suggesting this frame, but maybe should have given it a catchy name...)
(I wrote the above before seeing this part.) I guess "mechanism design" is similar to "bargaining by bounded rationalists", so you seem to have reached a similar conclusion. But "mechanism design" kind of assumes there's a disinterested third party who has the power to impose a "mechanism" that is designed to be socially optimal, whereas often you're one of the involved parties, and "bargaining" is a more general framing that also makes sense in that case.
You mean, we mistake theorists are not in perpetual conflict with conflict theorists, they are just making a mistake? O_o
If your concern is that this is evidence that the OP is wrong (since it has conflict-theoretic components, which are mindkillers), it seems important to establish that there are important false object-level claims, not just things that make such mistakes likely. If you can’t do that, maybe change your mind about how much conflict theory introduces mistakes?
If you're just arguing that laying out such models is likely to have bad consequences for readers, this is an important risk to track, but it's also changing the subject from the question of whether the OP's models do a good job explaining the data.
This is a really good point and a great distinction to make.
As an example, suppose I hear a claim that some terrorist group likes to eat babies. Such a claim may very well be true. On the other hand, it’s the sort of claim which I would expect to hear even in cases where it isn’t true. In general, I expect claims of the form “<enemy> is/wants/does <evil thing>”, regardless of whether those claims have any basis.
Now, clearly looking into the claim is an all-around solid solution, but it’s also an expensive solution—it takes time and effort. So, a reasonable question to ask is: should the burden of proof be on writer or critic? One could imagine a community norm where that sort of statement needs to come with a citation, or a community norm where it’s the commenters’ job to prove it wrong. I don’t think either of those standards are a good idea, because both of them require the expensive work to be done. There’s a correct Bayesian update whether or not the work of finding a citation is done, and community norms should work reasonably well whether or not the work is done.
A norm which makes more sense to me: there’s nothing wrong with writers occasionally dropping conflict-theory-esque claims. But readers should be suspicious of such claims a-priori, and just as it’s reasonable for authors to make the claim without citation, it’s reasonable for readers to question the claim on a-priori grounds. It makes sense to say “I haven’t specifically looked into whether <enemy> wants <evil thing>, but that sounds suspicious a-priori.”
This feels very similar to the debate on the MTG color system a while ago, which went (as half-remembered some time so much later I don’t remember how long it’s been, and it’s since been deleted):
A: [proposal of personality sorting system.]
B: [statement/argument that personality sorting systems are typically useless-to-harmful]
A: but this doesn’t respond to my particular personality system.
I’m sympathetic to B (equivalent to jonhswentworth) here. If members of category X are generally useless-to-harmful, it’s unfair and anti-truth to disallow incorporating that knowledge into your evaluations of an X. On the other hand, A could have provided rich evidence of why their particular system was good, and B could have made the exact same statement, and it would still be true. If there are ever exceptions to the rule of “category X is useless-to-harmful”, you need to have a system for identifying them
[I’m going to keep talking about this in the MTG case because I think a specific case is easier to read that “category X”, and it’s less loaded for me than talking about my own piece, if the correspondences aren’t obvious let me know and I can clarify]
A partial solution would be for B to outline not only why they’re skeptical of personality systems, but why, and what specific things would increase their estimation of a particular system. This is a lot to ask, which is a tax on this particular form of criticism. But if the problem is as described there’s a lot of utility in writing it up once, well, and linking to it as necessary.
@johnswentworth, if you’re up for it I think for this and other reasons there’s a lot of value in doing a full post on your general principle (with a link to this discussion). People clearly want to talk about it, and it seems valuable for it to have its own, easily-discoverable, space instead of being hidden behind my post. I would also like to resolve the general principle before discussing how to apply it to this post, which is one reason I’ve held back on participating in this sub-thread.
I probably won’t get to that soon, but I’ll put it on the list.
I also want to say that I’m sorry for kicking off this giant tangential thread on your post. I know this sort of thing can be a disincentive to write in the future, so I want to explicitly say that you’re a good writer, this was a piece worth reading, and I would like to read more of your posts in the future.
Who, specifically, is the enemy here, and what, specifically, is the evil thing they want?
It seems to me as though you’re describing as evil some motives which I’d consider pretty relatable, so as far as I can tell, you’re calling me an enemy with evil motives. Are people like me (and Elizabeth’s cousin, and Elizabeth herself, both of whom are featured in examples) a special exception whom it’s non-suspect to call evil, or is there some other reason why this is less suspect than the OP?
By “enemy” I meant the hypothetical terrorist in the “some terrorist group likes to eat babies” example.
I’m very confused about what you’re perceiving here, so I think some very severe miscommunication has occurred. Did you accidentally respond to a different comment than you thought?
How is that relevant to the OP?
I do tend to update downwards on the likelihood of a piece being true if it seems to have obvious alternative generators for how it was constructed that are unlikely to be very truth-tracking. Obvious examples here are advertisements and political campaign speeches.
In that sense, I do think it’s reasonable to distrust pieces of writing that seem to be part of some broader conflict, and as such are unlikely to have been generated in anything close to an unbiased way. A lot of conflict-theory-heavy pieces tend to be part of some conflict, since accusing your enemies of being evil is memetic warfare 101.
I am not sure (yet) what the norms for discussion around these kinds of updates should be, but I did want to bring up that there are some valid Bayesian inferences here.
While the post has a few sentences about moral blame, the main thesis is that power allows people to avoid committing crimes directly, while less-powerful people commit those crimes instead (and hide this from the powerful people). This is a denotative statement that can be evaluated independently of “who should we be angry at”.
Such denotative statements are very useful when considering different mechanisms for resolving principal-agent problems. Mechanism design is, to a large extent, a conflict theory, because it assumes conflicts of interest between different agents, and it determines what consequences should happen to different agents, e.g. in some cases “who we should be angry at”, if that’s the best available implementation.
I would say that mechanism design is how mistake theorists respond to situations where conflict theory is relevant—i.e., where there really is a “bad guy”. Mechanism design is not about “what consequences should happen to different agents”, it’s about designing a system to achieve a goal using unaligned agents—“consequences” are just one tool in the tool box, and mechanism design (and mistake theory) is perfectly happy to use other tools as well.
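As a purely hypothetical illustration of that framing, here is a toy principal-agent sketch (the specific model and every number in it are my own assumptions, not anything from the OP): the principal can’t observe effort, only a noisy outcome, and “designs the mechanism” by committing to an outcome-contingent bonus so that the unaligned agent’s best response is to work. Blame and anger don’t appear anywhere in the model; the incentive structure does the work.

```python
# Toy principal-agent sketch (all values are illustrative assumptions).
# The agent privately chooses an effort level; the principal only observes
# success/failure and commits in advance to a bonus paid on success.

P_SUCCESS = {"shirk": 0.3, "work": 0.8}      # assumed success probabilities
EFFORT_COST = {"shirk": 0.0, "work": 10.0}   # agent's private cost of effort
PROJECT_VALUE = 100.0                        # value of success to the principal

def agent_best_response(bonus):
    """The agent maximizes its own expected bonus minus effort cost."""
    payoff = {e: P_SUCCESS[e] * bonus - EFFORT_COST[e] for e in P_SUCCESS}
    return max(payoff, key=payoff.get)

def principal_expected_profit(bonus):
    """Principal's expected value, taking the agent's self-interested response as given."""
    effort = agent_best_response(bonus)
    return P_SUCCESS[effort] * (PROJECT_VALUE - bonus)

# The "mechanism design" step: choose the bonus that works best given the
# agent's incentives, rather than asking who deserves blame afterwards.
best_bonus = max(range(0, 101), key=principal_expected_profit)
# With these made-up numbers the search lands on a bonus just above 20, the
# point at which working becomes strictly better than shirking for the agent.
print(best_bonus, agent_best_response(best_bonus), principal_expected_profit(best_bonus))
```

In this sketch the bonus schedule is only one lever among many; the same style of model could instead vary monitoring, task assignment, or who bears the downside risk, which is the sense in which “consequences” are just one tool in the toolbox.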
There’s certainly a denotative idea in the OP which could potentially be useful. On the other hand, saying “the post has a few sentences about moral blame” seems like a serious understatement of the extent to which the OP is about who to be angry at.
The OP didn’t talk about any other possible implementations, which is part of why it smells like conflict theory. Framing it through principal-agent problems would at least have immediately suggested others.
“Conflict theory” is specifically about the meaning of speech acts. It is not the general question of conflicting interests. The question of conflict vs mistake theory is fundamentally: what are we doing when we talk? Are we fighting over the exact location of a contested border, or trying to refine our compression of information to better empower us to reason about things we care about?
Quoting Scott’s post:
Part of what seems strange about drawing the line at denotative vs. enactive speech is that there are conflict theorists who can speak coherently/articulately in a denotative fashion (about conflict), e.g.:
Clausewitz’s On War (“war is the continuation of politics by other means”)
Venkatesh Rao’s “A Quick (Battle) Field Guide to the New Culture Wars”
It seems both coherent and consistent with conflict theory to believe “some speech is denotative and some speech is enacting conflict.”
(I do see a sense in which mechanism design is a mistake theory, in that it assumes that deliberation over the mechanism is possible and desirable; however, once the mechanism is in place, it assumes agents never make mistakes, and differences in action are due to differences in values)
I don’t quite draw the line at denotative vs enactive speech—command languages which are not themselves contested would fit into neither “conflict theory” nor “mistake theory.”
“War is the continuation of politics by other means” is a very different statement from its converse, that politics is a kind of war. Clausewitz is talking about states with specific, coherent policy goals, achieving those goals through military force, in a context where there’s comparatively little pretense of a shared discourse. This is very different from the kind of situation described in Rao, where a war is being fought in the domain of ostensibly “civilian” signal processing.
I’m not sure I endorse this comment as written, but just wanted to note that I appreciate trying to tease out why the article felt subtly off to you.
Something about framing it through mistake theory still feels off to me, too, though. I see where you’re coming from with the naive conflict theory feeling off. But something important that the article seemed to be grappling with (or at least, that I was grappling with as I read the article, especially through the lens of your comment) was something like:
“We have a bunch of naive intuitions about who to blame. Those naive intuitions get weird in sufficiently complex systems, and it’s not obvious what to do. One thing you might do is discard the blame concept. But, this feels a bit unsatisfying because many people are still playing the blame game, and directing the blame at someone, and it’s rarely the privileged people who were able to purchase distance from the blameworthy things. And maybe the solution here is to get everyone out of conflict theory, but it’s not obvious to me that this is a tractable or even optimal-given-buy-in approach, because people in fact do fight over things.” [edit: and jessicata’s note that incentive alignment is conflict theory feels relevant]