I agree with the reasoning in this essay.
Taken a bit further, however, it explains why valuing “safety” is extremely dangerous—so dangerous that, in fact, online communities should consciously reject it as a goal.
The problem is that when you make “safety” a goal, you run a very high risk of handing control of your community to the loudest and most insistent performers of offendedness and indignation.
This failure mode might be manageable if the erosion of freedom by safetyism were still merely an accidental and universally regretted effect of trying to have useful norms about politeness. I can remember when that was true, but it is no longer the case.
These days, safetyism is often—even usually—what George Carlin memorably tagged “Fascism masquerading as good manners”. It’s motivated by an active intention to stamp out what the safetyists regard as wrongspeech and badthink, with considerations of safety an increasingly thin pretext.
Whenever that’s true, the kinds of reasonable compromises that used to be possible with honest and well-intentioned safetyists cannot be made any more. The only way to counterprogram against the dishonest kind is radical rejection—telling safetyism that we refuse to be controlled through it.
Yes, this means that enforcing useful norms of politeness becomes more difficult. While this is unfortunate, it is becoming clearer by the day that the only alternative is the death of free speech—and, consequently, the strangulation of rational discourse.
This is kind of true, but taken seriously it only leaves “freedom” as an achievable goal, which I don’t think is right. I didn’t say much about it because it seems to me that this kind of weaponized safety is not a general feature of online communities, but rather a feature particular to the present moment, and the correct solution is on the openness axis: don’t let safetyists into your community, and kick them out quickly once they show their colors.
Also, the support for “safety” among these people is more on the level of slogan than actual practice. My experience is that groups which place a high priority on this version of “safety” are in fact riven with drama and strife. If you prioritise actual safety and not just the slogan, you’ll find you still have to kick out the people who constantly hide behind their version of “safety”.
I agree—I think there are many communities which easily achieve a high degree of safety without “safetyism”, typically by being relatively homogeneous or having external sources of trust and goodwill among their participants. LW is an example.
I think there is an important distinction between being “safe” from ideas and “safe” from interpersonal attacks. In an online space, I expect moderation to control more for the latter than the former, protecting not against wrongspeech so much as various forms of harassment.
Rational discourse is rarely possible in an environment that is not protected by moderation or at least shared norms of appropriate communication (this protection tends to be correlated with “openness”). Having free speech on a particular platform is rarely useful if it’s drowned out by toxicity. I support strong freedom of ideas (within the bounds of topicality) but when the expression of those ideas is in bad faith, there is no great value in protecting that particular form of expression.
There is a hypothesis that unconstrained speech and debate will lead to wrong concepts fading away and the less wrong concepts rising to more common acceptance, but internet history suggests that this ideal can’t last for long in a truly free space unless that freedom is never actually tested. As soon as bad actors are involved, you either have to restrict freedom or else experience a degradation in discourse (or both). If safety is not considered, then a platform effectively operates at the level of its worst users.
I agree that the distinction you pose is important. Or should be. I remember when we could rely on it more than we can today.
Unfortunately, one of the tactics of people gaming against freedom is to deliberately expand the definition of “interpersonal attack” in order to suppress ideas they dislike. We have reached the point where, for example:
The use/mention distinction with respect to certain taboo words is deliberately ignored, so that a mention is deliberately conflated with use and use is deliberately conflated with attack.
Posting a link to a peer-reviewed scientific paper on certain taboo subjects is instantly labeled “hate facts” and interpreted as interpersonal attack.
Can you propose any counterprogram against this sort of dishonesty other than rejecting the premise of safetyism entirely?
A few patterns I’ve noticed in consistently good moderation that resists this kind of trolling/power game:
Making drama for the sake of it, even under a pretense, is usually regarded as a more severe infraction than the rudeness or personal attack that started it. Creating extra work for the moderation team is frowned upon (don’t feed the trolls). Punish every escalation and provocation, not just the first in the thread.
Escalating conflicts and starting flamewars is seen as more toxic than any specific mildly/moderately offensive post. Starting fights repeatedly, especially with multiple different people, is a fast ticket to a permaban. Anyone consistently and obviously lowering the quality of discussions needs to be removed ASAP (see the sketch below).
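To make that escalation logic concrete, here is a minimal, purely illustrative Python sketch of how such a policy could be encoded. None of it comes from any real moderation tool; the class, the severity weights, and the thresholds (`ModerationLedger`, `SEVERITY`, `PERMABAN_SCORE`, and so on) are all invented for the example.

```python
from collections import defaultdict
from typing import Optional

# Hypothetical severity weights: drama-making and escalation outrank the
# mild rudeness that typically starts a thread. All names, weights, and
# thresholds here are invented for illustration.
SEVERITY = {
    "rudeness": 1,
    "personal_attack": 2,
    "provocation": 3,         # every escalation counts, not just the first
    "manufactured_drama": 4,  # drama for its own sake is the worst offense
}

WARN_LIMIT = 5                # below this, just warn (arbitrary)
PERMABAN_SCORE = 10           # accumulated-severity threshold (arbitrary)
DISTINCT_TARGETS_FOR_BAN = 3  # "starting fights with multiple different people"


class ModerationLedger:
    """Tracks infractions per user and applies the escalation policy."""

    def __init__(self) -> None:
        self.scores = defaultdict(int)   # user -> accumulated severity
        self.targets = defaultdict(set)  # user -> people they started fights with

    def record(self, user: str, kind: str, target: Optional[str] = None) -> str:
        """Log one infraction and return the resulting action."""
        self.scores[user] += SEVERITY[kind]
        if target is not None:
            self.targets[user].add(target)
        # Picking fights with several different people is a fast ticket
        # to a permaban, regardless of the raw score.
        if len(self.targets[user]) >= DISTINCT_TARGETS_FOR_BAN:
            return "permaban"
        if self.scores[user] >= PERMABAN_SCORE:
            return "permaban"
        return "warn" if self.scores[user] < WARN_LIMIT else "tempban"


ledger = ModerationLedger()
print(ledger.record("troll42", "provocation", target="alice"))  # warn
print(ledger.record("troll42", "provocation", target="bob"))    # tempban
print(ledger.record("troll42", "provocation", target="carol"))  # permaban
```

The only point of the sketch is the ordering it encodes: each provocation accrues weight on its own, and spreading conflict across several different people trips the ban condition faster than any single rude post would.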
As long as people are dishonestly gaming the system, there will always be problems; there is no silver-bullet solution. It’s a fundamentally hard problem of balancing competing values, and any model proposed will have failings. The best we can do is to strike that balance appropriately for each individual platform. Each one will tilt differently, but I doubt that rejecting safety entirely is a good idea in most cases.
It’s often tempting to idealize one value or another, but when any value is taken to an extreme, the others suffer greatly. If you can back away from a pure ideal in any dimension, the overall result tends to be more functional and robust, though never perfect.