I agree that the distinction you pose is important. Or should be. I remember when we could rely on it more than we can today.
Unfortunately, one of the tactics of people gaming against freedom is to deliberately expand the definition of “interpersonal attack” in order to suppress ideas they dislike. We have reached the point where, for example:
The use/mention distinction with respect to certain taboo words is deliberately ignored, so that mention is conflated with use and use is conflated with attack.
Posting a link to a peer-reviewed scientific paper on certain taboo subjects is instantly labeled “hate facts” and interpreted as interpersonal attack.
Can you propose any counterprogram against this sort of dishonesty other than rejecting the premise of safetyism entirely?
Here is what I’ve noticed in consistently good moderation that resists this kind of trolling/power game:
Making drama for the sake of it, even under a pretense, is usually regarded as a more severe infraction than any rudeness or personal attack that prompted it. Creating extra work for the moderation team is frowned upon (don’t feed the trolls). Punish every escalation and provocation, not just the first in the thread.
Escalating conflicts and starting flamewars is seen as more toxic than any specific mildly/moderately offensive post. Starting fights repeatedly, especially with multiple different people, is a fast ticket to a permaban. Anyone consistently and obviously lowering the quality of discussion needs to be removed ASAP.
As long as people are dishonestly gaming the system, there will always be problems; there is no silver-bullet solution. It’s a fundamentally hard problem of balancing competing values, and any model proposed will have failings. The best we can do is balance those values appropriately for each individual platform. Each one will tilt differently, but rejecting safety entirely is unlikely to be a good idea in most cases.
It’s often tempting to idealize one value or another, but when any single value is taken to an extreme, the others suffer greatly. If you can back away from a pure ideal in any dimension, the overall result tends to be more functional and robust, though never perfect.