My current sense is that I should think of posing conflict theories as drawing on a highly constrained, limited communal resource. Spending it will often cause conflict and leave people mind-killed, but a rule that says one can never spend it means that when the resource is truly necessary, it won’t be available.
“Talking about conflict is a limited resource” seems very, very off to me.
There are two relevant resources in a community. One is actual trustworthiness: how often do people inform each other (rather than deceive each other), help each other (rather than cheat each other), etc. The other is correct beliefs about trustworthiness: are people well-calibrated and accurate about how trustworthy others (both in particular and in general) are. These are both resources. It’s strictly better to have more of each of them.
If Bob deceives me,
I desire to believe that Bob deceives me;
If Bob does not deceive me,
I desire to believe that Bob does not deceive me;
Let me not become attached to beliefs I may not want.
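A toy way to make that second resource concrete (my illustration, not part of the original exchange, with entirely hypothetical numbers): score people’s probabilistic trust predictions against what actually happened. A Brier score does this in a few lines of Python:

```python
def brier_score(predictions, outcomes):
    """Mean squared error between predicted trust probabilities and
    observed behavior (1 = dealt honestly, 0 = deceived/cheated).
    Lower means better-calibrated beliefs about trustworthiness."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Hypothetical data: one person's predicted probability that each
# interaction partner would deal honestly, and what actually happened.
predicted = [0.9, 0.8, 0.3, 0.7]
actual = [1, 1, 0, 1]
print(brier_score(predicted, actual))  # 0.0575 -> trust beliefs track reality well
```

A community where this score falls over time is gaining the second resource even if actual trustworthiness stays flat, which is part of why it’s strictly better to have more of each.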
Talking about conflict in ways that are wrong is damaging a resource (it’s causing people to have incorrect beliefs). Using clickbaity conflict-y titles without corresponding evidence is spending a resource (attention). Talking about conflict informatively/accurately is not spending a resource, it’s producing a resource.
EDIT: also note, informative discussion of conflict, such as in Robin Hanson’s work, makes it easier to talk informatively about conflict in the future, as it builds up a theoretical framework and familiarity. Which means “talking about conflict is a limited resource” is backwards.
I’m hearing you say “Politics is not the mind-killer, talking inaccurately and carelessly about politics is the mind-killer! If we all just say true things and don’t try to grab attention with misleading headlines then we’ll definitely just have a great and net positive conversation and nobody will feel needlessly threatened or attacked”. I feel like you are aware of how toxic things like bravery debates are, and I expect you agree they’d be toxic even if everyone tried very hard to only say true things. I’m confused.
I’m saying it always bears a cost, and a high one, but not an insurmountable one. I think the cost differs between communities, depending on their incentives, norms and culture, and you can build spaces where a lot of good discussion can happen at low cost.
You’re right that Hanson feels to me pretty different from my other examples, in that I don’t feel like marginal Overcoming Bias blog posts are paying a cost. I suspect this might have to do with the fact that Hanson has sent a lot of very costly signals that he is not fighting for a side but is just trying to be an interested scientist. But I’m not sure why I feel differently in this case.
I’m going to try explaining my view and how it differs from the “politics is the mind-killer” slogan.
People who are good at talking about conflict, like Robin Hanson, can do it in a way that improves people’s ability to talk rationally about conflict in the future. Such discussions are not only not costly, they’re the opposite of costly.
Some people (most people?) are bad at talking about conflict. They’re likely to contribute disinformation to these discussions. The discussions may or may not be worth having, but it’s not surprising if high-disinformation conversations end up quite costly.
My view: people who are actually trying can talk rationally enough about conflict for it to be generally positive. The issue is not so much a question of ability as a question of intent-alignment (though getting intent aligned could be thought of as a kind of skill). So, yes: I do think political discussions generally go well when people try hard to only say true things!
Why would I believe this? The harms from talking about conflict aren’t due to people making simple mistakes, the kind that are easily corrected by giving them more information (which could be uncovered in the course of discussions of conflict). They’re due to people enacting conflict in the course of discussing conflict, rather than using denotative speech.
Yes, I am advocating a conflict theory, rather than a mistake theory, for why discussions of conflict can be bad. I think, if you consider conflict vs mistake theories, you will find that a conflict theory makes better predictions for what sorts of errors people make in the course of discussing conflict, than a mistake theory does. (Are errors random, or do they favor fighting on a given side / appeasing local power structures / etc?)
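As a toy way to cash out that last question (my sketch, not part of the original comment, with made-up numbers): under a mistake theory, errors should fall roughly symmetrically across sides; under a conflict theory, they should skew toward someone’s side. An exact binomial test separates the two:

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: probability of an outcome at least
    as far from the expected mean as k successes out of n trials,
    under the null hypothesis that each error favors either side at random."""
    mean = n * p
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n + 1)
               if abs(i - mean) >= abs(k - mean))

# Hypothetical tally: of 20 factual errors made in a political discussion,
# 17 happened to favor the error-maker's own side.
print(binomial_two_sided_p(17, 20))  # ~0.0026
```

With a skew like 17 out of 20, the “random mistakes” story gets a p-value of about 0.0026; the errors look optimized, which is exactly the conflict-theory prediction.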
Basically, if the issue is adversarial/deceptive action (conscious or subconscious) rather than simple mistakes, then “politics is the mind-killer” is the wrong framing. Rather, “politics is a domain where people often try to kill each other’s minds” is closer.
In such a circumstance, building models of which optimization pressures are harming discourse in which ways is highly useful, and actually critical for social modeling. (As I said in my previous comment, it’s strictly positive for an epistemic community to have better information about the degree of trustworthiness of different information systems.)
If you see people making conflict theory models, and those models seem correct to you (or at least, you don’t have any epistemic criticism of them), then shutting down the discussions (on the basis that they’re conflict-theorist) is actively doing harm to this model-building process. You’re keeping everyone confused about where the adversarial optimization pressures are. That’s like preventing people from turning on the lights in a room that contains monsters.
Therefore, I object to talking about conflict theory models as “inherently costly to talk about” rather than “things some (not all!) people would rather not be talked about for various reasons”. They’re not inherently costly. They’re costly because some optimization pressures are making them costly. Modeling and opposing (or otherwise dealing with) these is the way out. Insisting on epistemic discourse even when such discourse is about conflict is a key way of doing so.
Thank you, this comment helped me understand your position quite a bit. You’re right: discussing conflict theories is not inherently costly; rather, such discussion is often costly because powerful optimization pressures are punishing it.
I strongly agree with you here:
I am advocating a conflict theory, rather than a mistake theory, for why discussions of conflict can be bad. I think, if you consider conflict vs mistake theories, you will find that a conflict theory makes better predictions for what sorts of errors people make in the course of discussing conflict, than a mistake theory does.
This is also a large part of my model of why discussions of conflict often go bad: power struggles are being enacted through (and systematically distorting the use of) language and reasoning.
(I am quite tempted to add that even in a room with mostly scribes, the incentive for actors to pretend to be scribes can make it very hard for a scribe to figure out whether someone is a scribe or an actor, and this information asymmetry can lead to scribes distrusting all attempts to discuss conflict theories and reading such discussions as political coordination.
Yet I notice that I pretty reflexively looked for a mistake theory there, and my model of you suggested the hypothesis that I am much less comfortable with conflict theories than with mistake theories. I guess I’ll look out for this further in my thinking, and consider whether it’s false. Perhaps, in this case, it is way easier than I’m suggesting for scribes to recognise each other, and the truth is we just have very few scribes.)
The next question is under what norms, incentives and cultures one can have discussions of conflict theories in which people are playing the role of Scribe, and in which that is common knowledge. I’m not sure we agree on the answer to that question, or on what the current norms in this area should be. I’m working on a longer answer, maybe post-length, to Zach’s comment below, so I’ll see if I can present my thoughts on that there.
By the way, this is a fantastic comment and would make a great post pretty much by itself (with maybe a little context about the comment it’s replying to).
enacting conflict in the course of discussing conflict
… seems to be exactly why it’s so difficult to discuss a conflict theory with someone already convinced that it’s true – any such discussion is necessarily an attack within that conflict, since it in effect presupposes that the theory might be false.
But that also makes me think that maybe the best rhetorical counter to someone enacting a conflict is to explicitly state that one is unconvinced of the truth of the corresponding conflict theory, or that one is decoupling the current discussion from that (or any) conflict theory.