I think it’s a genuinely difficult problem to draw the boundary between a conflict and a mistake theory, in no small part due to the difficulties in drawing the boundary between lies and unconscious biases (which I rambled a bit about here). You can also see the discussion on No, it’s not The Incentives—it’s you as a disagreement over where this boundary should be.
That said, one thing I’ll point out is that explaining Calhoun and Buchanan’s use of public choice theory as entirely a rationalisation for their political goals is a conflict theory. It’s saying that their bringing public choice theory into the conversation was not a good-faith attempt to convey how they see the world, but obfuscation in favour of their political side winning. And more broadly, saying that public choice theory is racist is a theory that says the reason it is brought up in general is not that people have differing understandings of economics, but that people have different political goals and are trying to win.
I find that thinking of ‘conflict theorists’ as a single coherent group confuses me, and that I should instead replace the symbol with the substance when I’m tempted to use the term. There are many types of people who sometimes use conflict theories, and it is confusing to lump them in with people who always use them, because they often have different reasons for using them when they do.
To give one example of people who always use it: there are certain people who have found, for most of their lives, that the main determinant of outcomes for them is political conflict waged by people above them, and who are only really able to understand the world using theories of conflict. They’ve also never gained a real understanding of any of the fascinating and useful different explanations for how social reality works (example, example), or a sense that you can often expand massively rather than fight over existing resources. And when they’re looking at someone bringing in public choice theory to argue one side of a social fight, they get an impression that the person is finding clever arguments for their position, rather than being honest.
(This is a mistake theory of why some people primarily reason using conflict theories. There are conflict theories that explain it as well.)
I think it’s good to be able to describe what such people are doing, and what experiences have led them to that outlook on life. But I also think that there are many reasons for holding a conflict theory about a situation, and these people are not at all the only examples of people who use such theories regularly.
Added: clone of saturn’s 3-point explanation seems right to me.
I get what you’re saying about theories vs theorists. I agree that there are plenty of people who hold conflict theories about some things but not others, and that there are multiple reasons for holding a conflict theory.
None of this changes the original point: explaining a problem by someone being evil is still a mind-killer. Treating one’s own arguments as soldiers is still a mind-killer. Holding a conflict theory about any particular situation is still a mind-killer, at least to the extent that we’re talking about conflict theory in the form of “bad thing happens because of this bad person” as opposed to “this person’s incentives are misaligned”. We can explain other people’s positions by saying they’re using a conflict theory, and that has some predictive power, but we should still expect those people to usually be mind-killed by default—even if their arguments happen to be correct.
As you say, explaining Calhoun and Buchanan’s use of public choice theory as entirely a rationalisation for their political goals is a conflict theory. Saying that people bring up public choice theory not due to differing economic understanding but due to different political goals is a conflict theory. And I expect people using either of those explanations to be mind-killed by default, even if the particular interpretation were correct.
Even after all this discussion of theories vs theorists, “conflict theory = predictably wrong” still seems like a solid heuristic.
Sorry for the delay, a lot has happened in the last week.
Let me point to where I disagree with you.
Holding a conflict theory about any particular situation is still a mind-killer, at least to the extent that we’re talking about conflict theory in the form of “bad thing happens because of this bad person” as opposed to “this person’s incentives are misaligned”.
My sense is you are underestimating the cost of not being able to use conflict theories. Here are some examples where I feel like prohibiting me from even considering that a bad thing happened because a person was bad will severely limit my ability to think and talk freely about what is actually happening.
A Harvard professor of social science arguing that replications are disrespectful and should be assumed false.
Physics academia writing an attack-piece on a non-academic after he presented a novel theory of fundamental physics in a lecture series at Oxford.
Many of Robin Hanson’s great hypotheses, like politics isn’t about policy, inequality talk is about grabbing, and too much consulting?
Things that went down with SlateStarCodex and discussion of the culture war.
Sam Harris and his aggressive clashes with people like Ezra Klein and Glenn Greenwald.
There’s something very valuable that you’re pointing at, and I agree with a lot of it. There shouldn’t be conflict theories in a math journal. It’s plausible to me there shouldn’t be conflict theories in an economics journal. And it’s plausible to me that the goal should be for the frontpage of LessWrong to be safe from them too, because they do bring major costs in terms of their mind-killing nature, and furthermore because several of the above bullet points are simply off-topic for LessWrong. We’re not here to discuss current-day tribal politics in various institutions, industries and communities.
And if I were writing publicly about any of the above topics, I would heavily avoid bringing in conflict theories—and have in the past re-written whole essays to make only object-level points about a topic rather than attacking a particular person’s position, because I felt the way I had written it would come across as a bias-argument / conflict theory and destroy my ability to really dialogue with people who disagreed with me. Rather than calling them biased or self-interested, I prefer to use the most powerful of rebuttals in the pursuit of truth, which is showing that they’re wrong.
But ruling it out wholly in one’s discourse and life seems way too much. I think there are cases where wholly censoring conflict theories will cost far more than it’s worth, and that removing them entirely from your discourse will cripple you and allow you to be taken over by outside forces that want your resources.
For example, I can imagine a relatively straightforward implementation of “no conflict theories” in a nearby world meaning that I am not able to say that study after study is suspect, or that a position is being pushed by political actors, unless I first reinvent mechanism theory and a bunch of microeconomics and a large amount of technical language to discuss bias. If I assume the worst about all of the above bullet points, not being able to talk about bad people causing bad things could mean we are forced to believe lots of false study results and ignore a new theory of fundamental physics, plus silence economists, bloggers, and public intellectuals.
The Hanson examples above feel the strongest to me, because they’re central examples of something that’s able to lead to a universal, deep insight about reality and be a central part of LessWrong’s mission in understanding human rationality, whereas the others are mostly about current tribal politics. But I think they all substantially affect how much to trust our info sources.
My current sense is that I should think of posing conflict theories as a highly constrained, limited communal resource, and that while spending it will often cause conflict and people to be mind-killed, a rule that says one can never use that resource will mean that when that resource is truly necessary, it won’t be available.
***
Hmm.
I re-read the OP, and realise I actually identify a lot with your initial comment, and that I gave Elizabeth similar feedback when I read an earlier draft of hers a month ago. The wording of the OP crosses a few of my personal lines such that I would not publish it. And it’s actually surprisingly accurate to say that the key thing I’d be doing if I were editing the OP would be turning it from things that had a hint of being a conflict theory (aren’t people with power bad!) into things that felt like a mistake theory (here’s an interesting mechanism where you might mistakenly allocate responsibility). Conflict theories tend to explode and eat up communal resources in communities and on the internet generally, and are a limited (though necessary) resource that I want to use with great caution.
And if I were writing publicly about any topics where I had conflict theories, I would heavily avoid bringing in those conflict theories—and have in the past re-written whole essays to make only object-level points about a topic rather than attacking a particular person’s position, because I felt the way I had written it would come across as a bias-argument / conflict theory and destroy my ability to really dialogue with people who disagreed with me. When I get really irritated with someone’s position and have a conflict theory about the source of the disagreement, I still write mistake-theory posts like this, a post with no mention of the original source of motivation.
I think that one of the things most prominent to me on the current margin is that there are massive blockers on public discourse, stopping people from saying or writing anything. My model is that telling people who write things like the OP to do more work to make it all definitely mistake theory (which is indeed a standard I hold myself to) will not improve the current public discourse, but on the current margin simply stop public discourse. I feel similarly about Jessicata’s post on AI timelines, where it seems likely to me that the main outcome has been quite positive—even though I think I disagree with each of the three arguments in the post and its conclusion—because the current alternative is almost literally zero public conversation about plans for long AI timelines. I’m already noticing personal benefits from the discourse on the subject.
In the first half of this comment I kept arguing against the position “We should ban all conflict theories” rather than “Conflict theories are the mind-killer”, which are two very different claims, and only one of which you’ve been making. Right now I want to defend people’s ability to write down their thoughts in public, and I think the OP is strongly worth publishing in the situation we’re in. I could imagine a world where there was loads of great discussion of topics like the one the OP is about, where the OP stood out as not having met the higher standard of effort to avoid mind-killing anyone that the other posts had met, and where I’d go “this is unnecessarily likely to make people feel defensive and like there’s subtle tribal politics underpinning its conclusions, consider these changes?” But right now I’m very pro “Cool idea, let me share my thoughts on the subject too.”
(Some background: The OP was discussed about 2 weeks ago on Elizabeth’s FB wall, where someone else proposed a different reason why this post needed re-writing (PR concerns), and there I already argued that such high bars shouldn’t be put on people writing things. I think that person’s specific suggestion, if taken seriously, would be incredibly harmful to public discourse regardless of its current health, whereas in this case I think your literal claims are just right. Regardless, I am strongly pro the post and others like it being published.)
Conflict theories tend to explode and eat up communal resources in communities and on the internet generally, and are a limited (though necessary) resource that I want to use with great caution.
But are theories that tend to explode and eat up communal resources therefore less likely to be true? If not, then avoiding them for the sake of preserving communal resources is a systematic distortion on the community’s beliefs.
The distortion is probably fine for most human communities: keeping the peace with your co-religionists is more important than doing systematically correct reasoning, because religions aren’t trying to succeed by means of doing systematically correct reasoning. But if there is to be such a thing as a rationality community specifically, maybe communal resources that can be destroyed by the truth, should be.
(You said this elsewhere in the thread: “the goal is to have one’s beliefs correspond to reality—to use a conflict theory when that’s true, a mistake theory when that’s true”.)
But are theories that tend to explode and eat up communal resources therefore less likely to be true? If not, then avoiding them for the sake of preserving communal resources is a systematic distortion on the community’s beliefs.
Expected infrequent discussion of a theory shouldn’t lower estimates of its probability. (Does the intuition that such theories should be seen as less likely follow from most natural theories predicting discussion of themselves? Erroneous theorizing also predicts that, for example: “If this statement is correct, it will be the only topic of all future discussions.”)
In general, it shouldn’t be possible to expect well-known systematic distortions for any reason, because they should’ve been recalibrated away immediately. What not discussing a theory should cause is lack of precision (or progress), not systematic distortion.
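(To make that concrete, here is a minimal toy Bayes calculation; the numbers and the posterior_conflict helper below are illustrative assumptions, not measurements of anything. A reader who knows the taboo treats a stated mistake theory as zero evidence and keeps their prior, which is imprecise but not distorted; a reader who models the speech as candid updates away from conflict explanations and ends up systematically biased.)

```python
# Toy sketch (illustrative numbers only): a known filter on which theories get
# stated should cause loss of information, not a predictable bias.
def posterior_conflict(prior, p_state_mistake_given_conflict, p_state_mistake_given_mistake):
    """P(conflict is the true explanation | 'a mistake theory was stated'), via Bayes' rule."""
    joint_conflict = prior * p_state_mistake_given_conflict
    joint_mistake = (1 - prior) * p_state_mistake_given_mistake
    return joint_conflict / (joint_conflict + joint_mistake)

prior = 0.4  # assumed base rate of cases best explained by conflict

# Reader who knows the taboo: a mistake theory gets stated no matter what is true,
# so the statement is zero evidence and the prior is unchanged.
informed = posterior_conflict(prior, 1.0, 1.0)   # -> 0.4

# Reader who models speech as candid: conflict cases would rarely be described
# with a mistake theory, so hearing one pushes their belief down.
naive = posterior_conflict(prior, 0.1, 1.0)      # -> 0.0625

print(informed, naive)
```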
Consider a situation where:
People are discussing phenomenon X.
In fact, a conflict theory is a good explanation for phenomenon X.
However, people only state mistake theories for X, because conflict theories are taboo.
Is your prediction that the participants in the conversation, readers, etc, are not misled by this? Would you predict that, if you gave them a survey afterwards asking for how they would explain X, they in fact give a conflict theory rather than a mistake theory, since they corrected for the distortion due to the conflict theory taboo?
Would you correct your response so? (Should you?) If the target audience tends to act similarly, so would they.
Aside from that, “How do you explain X?” is really ambiguous and anchors on well-understood rather than apt framing. “Does mistake theory explain this case well?” is better, because you may well use a bad theory to think about something while knowing it’s a bad theory for explaining it. If it’s the best you can do, at least this way you have gears to work with. Not having a counterfactually readily available good theory because it’s taboo and wasn’t developed is of course terrible, but it’s not a reason to embrace the bad theory as correct.
Perhaps (75% chance?), in part because I’ve spent >100 hours talking about, reading about, and thinking about good conflict theories. I would have been very likely misled 3 years ago. I was only able to get to this point because enough people around me were willing to break conflict theory taboos.
It is not the case that everybody knows. To get from a state where not everybody knows to a state where everybody knows, it must be possible to talk openly about such things. (I expect the average person on this website to make the correction with <50% probability, even with the alternative framing “Does mistake theory explain this case well?”)
It actually does have to be a lot of discussion. Over-attachment to mistake theory (even when a moderate amount of contrary evidence is presented) is a systematic bias I’ve observed, and it can be explained by factors such as: conformity, social desirability bias (incl. fear), conflict-aversion, desire for a coherent theory that you can talk about with others, getting theories directly from others’ statements, being bad at lying (and at detecting lying), etc. (This is similar to, and may even be considered a special case of, the question of why people are misled by propaganda even when there is some evidence that the propaganda is propaganda; see Gell-Mann amnesia.)
This seems a bit off as Jessica clearly knows about conflict theory. The whole thing about making a particular type of theory taboo is that it can’t become common knowledge.
That’s relevant to the example, but not to the argument. Consider a hypothetical Jessica less interested in conflict theory or a topic other than conflict theory. Also, common knowledge doesn’t seem to play a role here, and “doesn’t know about” is a level of taboo that contradicts the assumption I posited about the argument from selection effect being “well-known”.
In general, it shouldn’t be possible to expect well-known systematic distortions for any reason, because they should’ve been recalibrated away immediately.
Hm. Is “well-known” good enough here, or do you actually need common knowledge? (I expect you to be better than me at working out the math here.) If it’s literally the case that everybody knows that we’re not talking about conflict theories, then I agree that everyone can just take that into account and not be confused. But the function of taboos, silencing tactics, &c. among humans would seem to be maintaining a state where everyone doesn’t know.
Is “well-known” good enough here, or do you actually need common knowledge?
There is no need for coordination or dependence on what others think. If you expect yourself to be miscalibrated, you just fix that. If most people act this way and accept the argument that convinced you, then you expect them to have done the same.
My current sense is that I should think of posing conflict theories as a highly constrained, limited communal resource, and that while spending it will often cause conflict and people to be mind-killed, a rule that says one can never use that resource will mean that when that resource is truly necessary, it won’t be available.
“Talking about conflict is a limited resource” seems very, very off to me.
There are two relevant resources in a community. One is actual trustworthiness: how often do people inform each other (rather than deceive each other), help each other (rather than cheat each other), etc. The other is correct beliefs about trustworthiness: are people well-calibrated and accurate about how trustworthy others (both in particular and in general) are. These are both resources. It’s strictly better to have more of each of them.
If Bob deceives me,
I desire to believe that Bob deceives me;
If Bob does not deceive me,
I desire to believe that Bob does not deceive me;
Let me not become attached to beliefs I may not want.
Talking about conflict in ways that are wrong is damaging a resource (it’s causing people to have incorrect beliefs). Using clickbaity conflict-y titles without corresponding evidence is spending a resource (attention). Talking about conflict informatively/accurately is not spending a resource, it’s producing a resource.
EDIT: also note, informative discussion of conflict, such as in Robin Hanson’s work, makes it easier to talk informatively about conflict in the future, as it builds up a theoretical framework and familiarity. Which means “talking about conflict is a limited resource” is backwards.
I’m hearing you say “Politics is not the mind-killer, talking inaccurately and carelessly about politics is the mind-killer! If we all just say true things and don’t try to grab attention with misleading headlines then we’ll definitely just have a great and net positive conversation and nobody will feel needlessly threatened or attacked”. I feel like you are aware of how toxic things like bravery debates are, and I expect you agree they’d be toxic even if everyone tried very hard to only say true things. I’m confused.
I’m saying it always bears a cost, and a high one, but not a cost that cannot be overcome. I think that the cost is different in different communities, and this depends on the incentives, norms and culture in those communities, and you can build spaces where a lot of good discussion can happen with low cost.
You’re right that Hanson feels to me pretty different from my other examples, in that I don’t feel like marginal Overcoming Bias blog posts are paying a cost. I suspect this might have to do with the fact that Hanson has sent a lot of very costly signals that he is not fighting a side but is just trying to be an interested scientist. But I’m not sure why I feel differently in this case.
I’m going to try explaining my view and how it differs from the “politics is the mind-killer” slogan.
People who are good at talking about conflict, like Robin Hanson, can do it in a way that improves the ability for people to further talk rationally about conflict. Such discussions are not only not costly, they’re the opposite of costly.
Some people (most people?) are bad at talking about conflict. They’re likely to contribute disinformation to these discussions. The discussions may or may not be worth having, but, it’s not surprising if high-disinformation conversations end up quite costly.
My view: people who are actually trying can talk rationally enough about conflict for it to be generally positive. The issue is not a question of ability so much as a question of intent-alignment. (Though, getting intent aligned could be thought of as a kind of skill). (So, I do think political discussions generally go well when people try hard to only say true things!)
Why would I believe this? The harms from talking about conflict aren’t due to people making simple mistakes, the kind that are easily corrected by giving them more information (which could be uncovered in the course of discussions of conflict). Rather, they’re due to people enacting conflict in the course of discussing conflict, rather than using denotative speech.
Yes, I am advocating a conflict theory, rather than a mistake theory, for why discussions of conflict can be bad. I think, if you consider conflict vs mistake theories, you will find that a conflict theory makes better predictions for what sorts of errors people make in the course of discussing conflict, than a mistake theory does. (Are errors random, or do they favor fighting on a given side / appeasing local power structures / etc?)
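(A minimal sketch of how that question could be checked in principle, with invented counts; nothing below refers to a real dataset. If errors were simple mistakes we would expect their direction to be roughly side-neutral, so even a crude sign test on which side each error favors separates the two stories.)

```python
# Hypothetical illustration: classify each factual error in a discussion by which
# side it happens to favor, then ask how surprising the imbalance would be if
# errors were direction-neutral (fair-coin null). Counts below are invented.
from math import comb

def two_sided_sign_test(k_favoring_one_side, n_errors):
    """P-value against the null that errors favor each side equally often."""
    def upper_tail(k):
        return sum(comb(n_errors, i) for i in range(k, n_errors + 1)) / 2 ** n_errors
    k = max(k_favoring_one_side, n_errors - k_favoring_one_side)
    return min(1.0, 2 * upper_tail(k))

print(two_sided_sign_test(17, 20))  # ~0.0026: hard to read as random, side-neutral mistakes
print(two_sided_sign_test(11, 20))  # ~0.82: consistent with ordinary error
```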
Basically, if the issue is adversarial/deceptive action (conscious or subconscious) rather than simple mistakes, then “politics is the mind-killer” is the wrong framing. Rather, “politics is a domain where people often try to kill each other’s minds” is closer.
In such a circumstance, building models of which optimization pressures are harming discourse in which ways is highly useful, and actually critical for social modeling. (As I said in my previous comment, it’s strictly positive for an epistemic community to have better information about the degree of trustworthiness of different information systems.)
If you see people making conflict theory models, and those models seem correct to you (or at least, you don’t have any epistemic criticism of them), then shutting down the discussions (on the basis that they’re conflict-theorist) is actively doing harm to this model-building process. You’re keeping everyone confused about where the adversarial optimization pressures are. That’s like preventing people from turning on the lights in a room that contains monsters.
Therefore, I object to talking about conflict theory models as “inherently costly to talk about” rather than “things some (not all!) people would rather not be talked about for various reasons”. They’re not inherently costly. They’re costly because some optimization pressures are making them costly. Modeling and opposing (or otherwise dealing with) these is the way out. Insisting on epistemic discourse even when such discourse is about conflict is a key way of doing so.
Thank you, this comment helped me understand your position quite a bit. You’re right that conflict theories are not inherently costly to discuss; rather, they’re often costly because powerful optimization pressures are punishing discussion of them.
I strongly agree with you here:
I am advocating a conflict theory, rather than a mistake theory, for why discussions of conflict can be bad. I think, if you consider conflict vs mistake theories, you will find that a conflict theory makes better predictions for what sorts of errors people make in the course of discussing conflict, than a mistake theory does.
This is also a large part of my model of why discussions of conflict often go bad—power struggles are being enacted through (and systematically distorting the use of) language and reasoning.
(I am quite tempted to add that even in a room with mostly scribes, the incentive for actors to pretend to be scribes can make it very hard for a scribe to figure out whether someone is a scribe or an actor, and this information asymmetry can lead to scribes distrusting all attempts to discuss conflict theories and reading such discussions as political coordination.
Yet I notice that I pretty reflexively looked for a mistake theory there, and my model of you suggested to me the hypothesis that I am much less comfortable with conflict theories than mistake theories. I guess I’ll look out for this further in my thinking, and consider whether it’s false. Perhaps, in this case, it is way easier than I’m suggesting for scribes to recognise each other, and the truth is we just have very few scribes.)
The next question is under what norms, incentives and cultures one can have discussions of conflict theories where people are playing the role of Scribe, and where that is common knowledge. I’m not sure we agree on the answer to that question, or what the current norms in this area should be. I’m working on a longer answer, maybe post-length, to Zach’s comment below, so I’ll see if I can present my thoughts on that.
This is a very helpful comment, thank you!
By-the-way, this is a fantastic comment and would make a great post pretty much by itself (with maybe a little context about that to which it’s replying).
enacting conflict in the course of discussing conflict
… seems to be exactly why it’s so difficult to discuss a conflict theory with someone already convinced that it’s true – any discussion is necessarily an attack in that conflict as it in effect presupposes that it might be false.
But that also makes me think that maybe the best rhetorical counter to someone enacting a conflict is to explicitly claim that one’s unconvinced of the truth of the corresponding conflict theory, or to explicitly claim that one’s decoupling the current discussion from a (or any) conflict theory.
I generally endorse this line of reasoning.
Nice :-)