Motivation gets internalized; following a norm can be consciously endorsed; disobeying a norm can be emotionally valent. So it’s not just about external influence on the norm; there is also the issue of what to do when the norm is already in someone’s head. To some extent it’s their problem, as there are obvious malign incentives towards becoming a utility monster. But I think it’s a real thing that happens all the time.
This particular norm is obviously well-known in the wider world; some people have it well-entrenched in themselves. The problem discussed above was reinforcing or spreading the norm, but there is also a problem of triggering the norm. It might be a borderline case of feeding it (in the form of its claim to apply on LW as well), but most of the effect is in influencing people who already buy the norm towards enacting it, by setting up central conditions for its enactment. Which can be unrewarding for them, but necessary on pain of disobeying the norm entrenched in their mind.
For example, what lsusr is talking about here is trying not to trigger the norm. Statements are less imposing than questions in that they are less valent as triggers for response-obligation norms. This respects the boundaries of people’s emotional equilibrium and maintains comfort. When the norms/emotions make unhealthy demands on one’s behavior, this leads to more serious issues. It’s worth correcting, but not without awareness of what might be going on. I guess this comes back to motivating some interpretive labor, but I think there are relevant heuristics at all levels of subtlety.
To some extent it’s their problem, as there are obvious malign incentives towards becoming a utility monster.
Just so.
In general, what you are talking about seems to me to be very much a case of catering to utility monsters, and denying that people have the responsibility to manage their own feelings. It should, no doubt, be permissible to behave in such ways (i.e., to carefully try to avoid triggering various unhealthy, corrosive, and self-sabotaging habits / beliefs, etc.), but it surely ought not be mandatory. That incentivizes the continuation and development of such habits and beliefs, rather than contributing to extinguishing them; it’s directly counterproductive.
EDIT: Also, and importantly, I think that describing this sort of thing as a “norm” is fundamentally inaccurate. Such habits/beliefs may contribute to creating social norms, but they are not themselves social norms; the distinction matters.
a case of catering to utility monsters [...] incentivizes the continuation and development of such habits and beliefs, rather than contributing to extinguishing them; it’s directly counterproductive
That’s one side of an idealism debate: a valid argument that pushes in this direction. But there are other arguments that push in the opposite direction; it’s not one-sided.
Some people change, given time or appropriate prodding. There are ideological (as in the set of endorsed principles) or emotional flaws: a lack of capability at projecting sufficiently thick skin, or at thinking in a way that makes thick skin unnecessary, with defenses against admitting this or being called out on it. It’s not obvious to me that the optimal way of getting past that is zero catering, and that the collateral damage of zero catering is justified by the effect compared to some catering, as well as steps like discussing the problem abstractly, making the fact of its existence more available without yet confronting it directly.
I retain my view that to a first approximation, people don’t change.
And even if they do—well, when they’ve changed, then they can participate usefully and non-destructively. Personal flaws are, in a sense, forgivable, as we are all human, and none of us is perfect; but “forgivable” does not mean “tolerable, in the context of this community, this endeavor, this task”.
It’s not obvious to me that the optimal way of getting past that is zero catering, and that the collateral damage of zero catering is justified by the effect compared to some catering
I think we are very far from “zero” in this regard. Going all the way to “zero” is not even what I am proposing, nor would propose (for example, I am entirely in favor of forbidding personal insults, vulgarity, etc., even if some hypothetical ideal reasoner would be entirely unfazed even by such things).
But that the damage done by catering to “utility monsters” of the sort who find requests for clarification to be severely unpleasant, is profound and far-ranging, seems to me to be too obvious to seriously dispute. It’s hypothetically possible to acknowledge this while claiming that failing to cater thusly has even more severely damaging consequences, but—well, that would be one heck of an uphill battle, to make that case.
(as well as steps like discussing the problem abstractly, making the fact of its existence more available without yet confronting it directly)
Well, I’m certainly all for that.
I think the central disagreement is on the side of ambient nondemanding catering, the same kind of thing as avoidance of weak insults, but for norms like response-obligation. This predictably lacks clear examples, and there are no standard words like “weak insult” to delineate the issue; it’s awareness of cheaply avoidable norm-triggering and norm-feeding that points to these cases.
I agree that unreasonable demands are unreasonable. Pointing them out gains more weight after you signal ability to correctly perceive the distinction between “reasonable”/excusable and clearly unreasonable demands for catering. Though that often leads to giving up or not getting involved. So there is value in idealism in a neglected direction; it keeps the norm of being aware of that direction alive.
I think the central disagreement is on the side of ambient nondemanding catering, the same kind of thing as avoidance of weak insults, but for norms like response-obligation. This predictably lacks clear examples, and there are no standard words like “weak insult” to delineate the issue; it’s awareness of cheaply avoidable norm-triggering and norm-feeding that points to these cases.
I must confess that I am very skeptical. It seems to me that any relevant thing that would need to be avoided, is a thing that is actually good, and avoiding which is bad (e.g., asking for examples of claims, concretizations of abstract concepts, clarifications of term usage, etc.). Of course if there were some action which were avoidable as cheaply (both in the “effort to avoid” and “consequences of avoiding” sense) as vulgarity and personal insults are avoidable, then avoiding it might be good. (Or might not; there is at least one obvious way in which it might actually be bad to avoid such things even if it were both possible and cheap to do so! But we may assume that possibility away, for now.)
But is there such a thing…? I find it difficult to imagine what it might be…
I agree that it’s unclear whether steps in this direction are actually any good, or instead mildly bad, if we ignore instances of acute conflict. But I think there is room for optimization that won’t have substantive negative consequences in the dimensions worth caring about, yet would be effective in avoiding conflict.
The conflict might be good in highlighting the unreasonable nature of utility monsterhood, or anti-epistemology promoted in the name of catering to utility monsterhood (including or maybe especially in oneself), but it seems like we are on the losing side, so not provoking the monsters it is. To make progress towards resolving this conflict, someone needs ability and motivation to write up things that explain the problem, as top-level posts and not depth-12 threads on 500-comment posts. Recently, that’s been Zack and Duncan, but that’s difficult when there aren’t more voices and simultaneously when moderators take steps that discourage this process. These factors might even be related!
So it’s things like adopting lsusr’s suggestion to prefer statements to questions. A similar heuristic I follow is to avoid actually declaring that there is an error/problem in something I criticise, or what that error is, and instead to give the argument or relevant fact that should make that obvious, at most gesturing at the problem by quoting a bit of text from where it occurs. If it’s still not obvious, it either wouldn’t work with more explicit explanation, or it’s my argument’s problem, and then it’s no loss; this heuristic leaves the asymmetry intact. I might clarify when asked for clarification. Things like that, generated as appropriate by awareness of this objective.
The conflict might be good in highlighting the unreasonable nature of utility monsterhood, or anti-epistemology promoted in the name of catering to utility monsterhood (including or maybe especially in oneself), but it seems like we are on the losing side, so not provoking the monsters it is.
One does not capitulate to utility monsters, especially not if one’s life isn’t dependent on it.
To make progress towards resolving this conflict, someone needs ability and motivation to write up things that explain the problem, as top-level posts and not depth-12 threads on 500-comment posts. Recently, that’s been Zack and Duncan, but that’s difficult when there aren’t more voices and simultaneously when moderators take steps that discourage this process. These factors might even be related!
I wholly agree.
So it’s things like adopting lsusr’s suggestion to prefer statements to questions.
As I said in reply to that comment, it’s an interesting suggestion, and I am not entirely averse to applying it in certain cases. But it can hardly be made into a rule, can it? Like, “avoid vulgarity” and “don’t use direct personal attacks” can be made into rules. There generally isn’t any reason to break them, except perhaps in the most extreme, rare cases. But “prefer statements to questions”—how do you make that a rule? Or anything even resembling a rule? At best it can form one element of a set of general, individually fairly weak, suggestions about how to reduce conflict. But no more than that.
A similar heuristic I follow is to avoid actually declaring that there is an error/problem in something I criticise, or what that error is, and instead to give the argument or relevant fact that should make that obvious, at most gesturing at the problem by quoting a bit of text from where it occurs.
I follow just this same heuristic!
Unfortunately, it doesn’t exactly work to eliminate or even meaningfully reduce the incidence of utility-monster attack—as this very post we’re commenting under illustrates.
(Indeed I’ve found it to have the opposite effect. Which is a catch-22, of course. Ask questions, and you’re accused of acting in a “Socratic” way, which is apparently bad; state relevant facts or “gesture at the problem by quoting a bit of text”, and you’re accused of “not steelmanning”, of failing to do “interpretive labor”, etc.; make your criticisms explicit, and you’re accused of being hostile… having seen the response to all possible approaches, I can now say with some confidence that modifying the approach doesn’t work.)
I’m gesturing at settling into an unsatisfying strategic equilibrium, as long as there isn’t enough engineering effort towards clarifying the issue (negotiating boundaries that are more reasonable-on-reflection than the accidental status quo). I don’t mean capitulation as a target even if the only place “not provoking” happens to lead is capitulation (in reality, or given your model of the situation). My model doesn’t say that this is the case.
Ask questions, and you’re accused of acting in a “Socratic” way, which is apparently bad; state relevant facts or “gesture at the problem by quoting a bit of text”, and you’re accused of “not steelmanning”, of failing to do “interpretive labor”, etc.; make your criticisms explicit, and you’re accused of being hostile… having seen the response to all possible approaches, I can now say with some confidence that modifying the approach doesn’t work.
The problem with this framing (as you communicate it, not necessarily in your own mind) is that it could look the same even if there are affordances for de-escalation at every step, and it’s unclear how efficiently they were put to use (it’s always possible to commit a lot of effort towards measures that won’t help; the effort itself doesn’t rule out availability of something effective). Equivalence between “not provoking” and “capitulation” is a possible conclusion from observing absence of these affordances, or alternatively it’s the reason the affordances remain untapped. It’s hard to tell.
What would any of what you’re alluding to look like, more concretely…?
(Of course I also object to the term “de-escalation” here, due to the implication of “escalation”, but maybe that’s beside the point.)
Just as escalation makes a conflict more acute, de-escalation settles it. Even otherwise uninvolved parties could plot either; there is no implication that absence of de-escalation is escalation. Certainly one party could de-escalate a conflict that the other escalates.
Some examples are two comments up, as well as your list of things that don’t work. Another move not mentioned so far is deciding to exit certain conversations.
The harder and more relevant question is whether some of these heuristics have the desired effect, and which ones are effective when. I think only awareness of the objective of de-escalation could apply these in a sensible way; specific rules (less detailed than a book-length intuition-distilling treatise) won’t work efficiently (that is, without sacrificing valuable outcomes).
I don’t think I disagree with anything you say in particular, not exactly, but I really am not sure that I have any sense of what the category boundaries of this “de-escalation” are supposed to be, or what the predicate for it would look like. (Clearly the naive connotation isn’t right, which is fine—although maybe it suggests a different choice of term? or not, I don’t really know—but I’m not sure where else to look for the answers.)
Maybe this question: what exactly is “the desired effect”? Is it “avoid conflict”? “Avoid unnecessary conflict”? “Avoid false appearance of conflict”? “Avoid misunderstanding”? Something else?
Acute conflict here is things like moderators agonizing over what to do, top-level posts lobbying for site-wide policy changes, rumors being gathered and weaponized. Escalation consists of interventions that target the outcome of there being an acute conflict (in the sense of optimization, so not necessarily intentionally). De-escalation consists of interventions that similarly target the outcome of absence of acute conflict.
In some situations acute conflict could be useful, as a Schelling point for change (a time to publish relevant essays, which might be heard more vividly as part of such an event). If it’s not useful, I think de-escalation is the way, with absence of acute conflict as the desired effect.
(De-escalation is not even centrally about avoiding individual instances of conflict. I think what matters more is the popular perception of one’s intentions/objectives/attitudes, and preventing the formation of grudges. Mostly not bothering those who probably have grudges. This more robustly targets absence of acute conflict, making some isolated incidents irrelevant.)
Acute conflict here is things like moderators agonizing over what to do, top-level posts lobbying for site-wide policy changes, rumors being gathered and weaponized. Escalation consists of interventions that target the outcome of there being an acute conflict (in the sense of optimization, so not necessarily intentionally). De-escalation consists of interventions that similarly target the outcome of absence of acute conflict.
Is this really anything like a natural category, though?
Like… obviously, “moderators agonizing over what to do, top-level posts lobbying for site-wide policy changes, rumors being gathered and weaponized” are things that happen. But once you say “not necessarily intentionally” in your definitions of “escalation” and “de-escalation”, aren’t you left with “whatever actions happen to increase the chance of there being an acute conflict” (and similarly “decrease” for “de-escalation”)? But what actions have these effects clearly depends heavily on all sorts of situational factors, identities and relationships of the participants, the subject matter of the conversation, etc., etc., such that “what specific actions will, as it will turn out, have contributed to increasing/decreasing the chance of conflict in particular situation X” is… well, I don’t want to say “not knowable”, but certainly knowing such a thing is, so to speak, “interpersonal-interaction-complete”.
What can really be said about how to avoid “acute conflict” that isn’t going to have components like “don’t discuss such-and-such topics; don’t get into such-and-such conversations if people with such-and-such social positions in your environment have such-and-such views; etc.”?
Or is that in fact the sort of thing you had in mind?
I guess my question is: do you envision the concrete recommendations for what you call “de-escalation” or “avoiding acute conflict” to concern mainly “how to say it”, and to be separable from “what to say” and “whom to say it to”? It seems to me that such things mostly aren’t separable. Or am I misunderstanding?
(Certainly “not bothering those who probably have grudges” is basically sensible as a general rule, but I’ve found that it doesn’t go very far, simply because grudges don’t develop randomly and in isolation from everything else; so whatever it was that caused the grudge, is likely to prevent “don’t bother person with grudge” from being very applicable or effective.)