Most subreddits don’t try to solve moderation problems with karma markets, they just publish rules and ban violators. These rules can be quite specific, e.g. /r/programming requires submissions to contain code, /r/gamedev limits self-promotion, /r/math disallows homework problems. We have a decade of experience with all kinds of unproductive posters on old LW, so it should be easy to come up with rules (e.g. no politics, no sockpuppets, no one-liners, don’t write more than 1 in 10 recent comments...) Do you think we need more than that?
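(For concreteness, a rule like that last one is mechanically checkable. Below is a minimal Python sketch of what such a check could look like; `Comment` and `violates_one_in_ten_rule` are made-up names for illustration, not any real LessWrong or Reddit API.)

```python
# Hypothetical sketch of checking a "don't write more than 1 in 10 recent comments"
# rule. The Comment type and function name are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str

def violates_one_in_ten_rule(author: str, recent_comments: list[Comment],
                             window: int = 10, max_share: int = 1) -> bool:
    """True if `author` wrote more than `max_share` of the last `window` comments."""
    last = recent_comments[-window:]
    return sum(1 for c in last if c.author == author) > max_share

# Example: an author who wrote 4 of the last 10 comments trips the rule.
recent = [Comment("alice"), Comment("bob")] * 3 + [Comment("alice")] + [Comment("carol")] * 3
print(violates_one_in_ten_rule("alice", recent))  # True
```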
One frame you can take on this is to ask: What rules and guidelines do we want to have? Should we have the same rules and guidelines for all sections of the page? What should be the consequences of violating those rules and guidelines? Are the guidelines fuzzy and in need of interpretation, or are they maximally objective? If the latter, how do you deal with people gaming the guidelines, or with realizing that things are still going wrong even when people follow the guidelines?
I would be strongly interested in people’s suggestions for rules we want to have, and the ideal ways of enforcing them.
Actually, I’m not sure I want to propose any particular rules. LW2.0 is different from old LW after all. Just keep moderating by whim and let rules arise as a non-exhaustive summary of what you do. In my limited mod experience it has worked beautifully.
Yeah, I think this is basically correct, though for some reason the other moderators and I have found this particularly emotionally taxing on the current LessWrong. This partially has to do with the kind of intuition that I think underlies a bunch of Christian’s comments on this thread, which can often lead to a feeling that every moderation decision has a high likelihood of a ton of people demanding a detailed explanation, and that makes the cost of moderating prohibitive in many ways.
I don’t think it’s necessarily a bad thing when the cost of moderating is high enough to prevent micromanaging.
From my perspective, in most cases where you want to moderate, you want the person whose post you moderated to understand why you made the moderation decision, so that they can act differently in the future. That works a lot better when the person gets an explanation of their mistake.
It works better on the individual level, and I certainly get why this feels more fair and valuable to an individual contributor.
But moderation is not just about individuals learning—it’s about the conversation being an interesting, valuable place to discuss things and learn.
Providing a good explanation for each moderation case is a fair amount of cognitive work. In a lot of cases it can be emotionally draining—if you started moderating a site because it had interesting content, but then you keep having to patiently explain the same things over and over to people who don’t get (or disagree with) the norms, it ends up being not fun, and then you risk your moderators burning out and conversational quality degrading.
It also means you have to scale moderation linearly with the number of people on the site, which can be hard to coordinate.
i.e. imagine a place with good conversation, and one person per week who posts something rude, or oblivious, or whatever. It’s not that hard to give that person an explanation.
But then if there are 10 people (or 1 prolific person) making bad comments every day, you have to spend roughly 70x the time providing explanations (10 per day is 70 per week, versus 1 per week). On one hand, yes, if you patiently explain things each time, those 10 people might grow and become good commenters. But it makes you slower to respond. And now the people you wanted to participate in the good conversations see a comment stream with 10 unanswered bad comments, and think “man, this is not the place where the productive discussion is happening.”
It’s not just about those 10 people’s potential to learn, it’s also about the people who are actually trying to have a productive conversation.
If you have 1 prolific person making comments every day that have to be moderated, the solution isn’t to delete those comments every day but to start by attempting to teach the person and ban the person if that attempt at teaching doesn’t work.
Currently, the moderation decisions aren’t only about moderators leaving bad comments unanswered; moderators go further and forbid other people from commenting on the relevant posts and from explaining why those comments shouldn’t be there.
Karma votes, and collapsing comments that get negative karma, are a way to reduce their effect on good conversations. That’s how quality norms got enforced on the old LessWrong. I think the cases where that didn’t work are relatively few, and those call for engagement where there’s first an attempt to teach the person, with a ban only when that attempt doesn’t work.
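(For illustration, the collapsing mechanism described here amounts to a simple threshold check. A minimal Python sketch follows; the threshold value and the `Comment` shape are assumptions made for the example, not the old LessWrong’s actual implementation.)

```python
# Minimal sketch of karma-threshold collapsing. The threshold and data model
# are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Comment:
    body: str
    karma: int

def should_collapse(comment: Comment, threshold: int = -3) -> bool:
    """Collapse (hide by default) comments whose karma has fallen below the threshold."""
    return comment.karma < threshold

comments = [Comment("substantive reply", 12), Comment("low-effort snark", -5)]
visible = [c for c in comments if not should_collapse(c)]
print([c.body for c in visible])  # ['substantive reply']
```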
(I’m speaking here about contributions made in good faith. I don’t think moderation decisions to delete spam by new users need explaining.)
It’s important to have a clear moderation policy somewhere, even if the policy is simply: “These are a non-exhaustive set of rules. We may remove content that tries to ignore their spirit.” People react less negatively if they’ve been informed.
Yeah. We do have the Frontpage moderation guidelines but they aren’t as visible, and we’re planning to add a link to them on the post and comment forms.
I very much agree with this approach, and recommend this tumblr post as the best commentary I’ve read on the general principle that suggests it.
I feel like you haven’t been particularly happy with that approach of ours in the past, though I might have misread you in this respect. My read was that you were pretty frustrated with a bunch of the moderation decisions related to past comments of yours. Though the crux for that might not at all be related to the meta-principle of moderation, and had simply more to do with differing intuitions about what comments should be moderated.
Seems plausible that just rules are good enough. There are benefits to having more dynamic stuff, and I would guess that a lot of LessWrong’s success comes partially from its ability to cover a large range of subjects and to draw deep connections between things that initially seem unrelated. At the moment it seems unlikely that a system as restricted as StackExchange is the best choice for us, but moving more in that direction might get us a lot of the benefit.