It works better on the individual level, and I certainly get why this feels more fair and valuable to an individual contributor.
But moderation is not just about individuals learning—it’s about the conversation being an interesting, valuable place to discuss things and learn.
Providing a good explanation for each moderation case is a fair amount of cognitive work. In a lot of cases it can be emotionally draining—if you started moderating a site because it had interesting content, but then you keep having to patiently explain the same things over and over to people who don’t get (or disagree with) the norms, it ends up being not fun, and then you risk your moderators burning out and conversational quality degrading.
It also means you have to scale moderation linearly with the number of people on the site, which can be hard to coordinate.
E.g., imagine a place with good conversation, and one person per week who posts something rude, or oblivious, or whatever. It's not that hard to give that person an explanation.
But then if there are 10 people (or 1 prolific person) making bad comments every day, and you have to spend 70x the time providing explanations… on one hand, yes, if you patiently explain things each time, those 10 people might grow and become good commenters. But it makes you slower to respond. And now the people you wanted to participate in the good conversations see a comment stream with 10 unanswered bad comments, and think "man, this is not the place where the productive discussion is happening."
It’s not just about those 10 people’s potential to learn, it’s also about the people who are actually trying to have a productive conversation.
If you have 1 prolific person making comments every day that have to be moderated, the solution isn't to delete those comments every day; it's to start by attempting to teach the person, and to ban them if that attempt at teaching doesn't work.
Currently, the moderation decisions aren't just a matter of moderators leaving bad comments unanswered; moderators go further and forbid other people from commenting on the relevant posts or explaining why those comments shouldn't be there.
Karma votes and collapsing comments that get negative karma are a way to let bad comments have less effect on good conversations. It's the way quality norms got enforced on the old LessWrong. I think the cases where that didn't work are relatively few, and that those call for engagement: first an attempt to teach the person, and a ban when that doesn't work.
(I'm speaking here about contributions made in good faith; I don't think moderation decisions to delete spam from new users need explaining.)