Do people think that a discussion forum on the moderation and deletion policies would be beneficial?
Yes. I think that a lack of policy 1) reflects poorly on the objectivity of moderators, even if only in appearance, and 2) diverts too much energy into nonproductive discussions.
As a moderator of a moderately large social community, I would like to note that moderator objectivity is not always the most effective way to reach the desired outcome (an enjoyable, productive community). Yes, we’ve compiled a list of specific actions that will result in warnings, bans, and so forth, but someone will always be able to think of a way to be an asshole which isn’t yet on our list—or which doesn’t quite match the way we worded it—or whatever. To do our jobs well, we need to be able to use our judgment (which is the criterion by which we were selected as moderators).
This is not to say that I wouldn’t like to see a list of guidelines for acceptable and unacceptable LW posts. But I respect the need for some flexibility on the editing side.
Any thoughts about whether there are differences between communities with a lot of specific rules and those with a more general “be excellent to each other” standard?
That’s a really good question; it makes me want to do actual experiments with social communities, which I’m not sure how you’d set up. Failing that, here are some ideas about what might happen:
Moderators of a very strictly rule-based community might easily find themselves in a walled garden situation just because their hands are tied. (This is the problem we had in the one I mentioned, before we made a conscious decision to be more flexible.) If someone behaves poorly, they have no justification to wield to eject that person. In mild cases they’ll tolerate it; in major cases, they’ll make an addition to the rules to cover the new infraction. Over time the rules become an unwieldy tome, intimidating users who want to behave well, reducing the number of people who actually read them, and increasing the chance of accidental infractions. Otherwise-useful participants who make a slip get a pass, leading to cries of favoritism from users who’d had the rules brought down on them before—or else they don’t get a pass, and the community loses good members.
This suggests a corollary of my earlier admonition for flexibility: What written rules there are should be brief and digestible, or at least accompanied by a summary. You can see this transition by comparing the long form of one community’s rules, complete with CSS and anchors that let you link to a specific infraction, with the short form that is used to give new people a general idea of what’s okay and not okay.
The potential flaw in the “be excellent to each other” standard is disagreement about what’s excellent—either amongst the moderators, or between the moderators and the community. For this reason, I’d expect it to work better in smaller communities with fewer of either. (This suggests another corollary—smaller communities need fewer written rules—which I suspect is true but with less confidence than the previous one.) If the moderators disagree amongst themselves, users will rightly have no idea what’s okay and what isn’t; when they’re punished for something that was okay before, they’ll be frustrated and likely resentful, neither of which is conducive to a pleasant environment. If the moderators agree but the users disagree with their consensus, well, one set or the other will have to change.
Of course, in online communities, simple benevolent dictatorships are a popular choice. This isn’t surprising, given that there is often exactly one person with real power (e.g. server access), which they may or may not choose to delegate. Two such channels I’m in demonstrate the differences in the above fairly well, if not perfectly (I’m not in any that really relies on a strict code of rules). One is very small (about a dozen people connected as I write this), and has exactly one rule*: “Be awesome.” The arbiter of awesome is the channel owner. Therefore, the channel is a collection of people who suit him. Since there is no other principle we claim to hold to (no standard against which to measure the dictator), and he’s not a jerk (obviously I don’t think so, since I’m still there), it works perfectly well.
The other is the one whose rules I linked earlier. It’s fairly large, but not enormous (~375 people connected right now). There are a few people who technically have power, but one to whom the channel “belongs” (the author of the work it’s a fan community of). Because he has better things to do than keep an eye on it, he delegates responsibility to ops who are selected almost entirely for one quality: he predicts that they will make moderation decisions he approves of. Between that criterion and an active side channel for discussing policy, we mostly avoid the problems of moderator disagreement, and the posted rules ensure that there are very few surprises for the users.
A brief digression: That same channel owner actually did do an experiment in the moderation of a social community. He wanted to know if you could design an algorithm for a bot to moderate an IRC channel, with the goal of optimizing the signal-to-noise ratio (SNR); various algorithms were discussed, and one was implemented. I would call it a tentative success: the channel in question does have very good SNR when active, but it moves slowly; the trivial chatter wasn’t replaced with insight, it was just removed. Also, the channel bot is supplemented by human mods, for the rare cases when the bot’s enforcement is being circumvented.
The algorithm he went with is not my favorite of the ones proposed, and I’d love to see a more rigorous experiment done—the trick would be acquiring ready bodies of participants.
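(To make that concrete, here is a purely hypothetical sketch of one simple approach in this family: a content-blind, per-user token bucket that mutes anyone who floods the channel. The class name, thresholds, and mute logic are all my invention for illustration, not the algorithm he actually implemented.)

```python
import time

# Hypothetical sketch: a token-bucket rate limiter as one crude way a bot
# might raise an IRC channel's SNR. Each user slowly earns "speaking
# tokens"; a burst of rapid messages drains the bucket and earns a
# temporary mute. Note that this judges pace, not content.

class RateLimitModerator:
    def __init__(self, capacity=5, refill_per_sec=0.1, mute_seconds=60):
        self.capacity = capacity              # max tokens a user can bank
        self.refill_per_sec = refill_per_sec  # tokens regained per second
        self.mute_seconds = mute_seconds      # how long an offender stays muted
        self.buckets = {}                     # nick -> (tokens, last timestamp)
        self.muted_until = {}                 # nick -> unmute timestamp

    def on_message(self, nick, now=None):
        """Return True if the message is allowed, False if it should be dropped."""
        now = time.time() if now is None else now
        if self.muted_until.get(nick, 0) > now:
            return False  # still muted
        tokens, last = self.buckets.get(nick, (self.capacity, now))
        tokens = min(self.capacity, tokens + (now - last) * self.refill_per_sec)
        if tokens < 1:
            self.muted_until[nick] = now + self.mute_seconds
            return False  # bucket empty: mute and drop
        self.buckets[nick] = (tokens - 1, now)
        return True

if __name__ == "__main__":
    mod = RateLimitModerator()
    # Six rapid-fire messages from one nick: the sixth trips the mute.
    for i in range(6):
        print(i, mod.on_message("chatterbox", now=1000.0 + i))
```

A throttle like this can only subtract chatter, never add insight, which matches the tradeoff observed above: good SNR, but a slow-moving channel.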
Anyway. If instead of experimenting on controlled social groups we surveyed existing groups that had survived, I think we’d find a lot of small communities with no or almost no codified rules, and then a mix of rules and judgment as they got larger. There would be a cap on the number of written rules that were actually enforced at any community size, and I wouldn’t expect to see even one community that relied 100% on a codified ruleset with no enforcer judgment at all.
(Now I kind of want to research some communities and write an article about this, although I don’t think it’d be particularly relevant for LW.)
*I’m told there is actually a second one: “No capitals in the topic.” This is more of a policy than a behavioral rule, though, and it began as an observation of the way things actually were.