This site has very tight rules on what argumentation structure and tone are acceptable: generally low-emotional-intensity words, and arguments generally need to be made in a highly step-by-step way to be held as valid.
I actually love this norm. It prevents emotions from affecting judgement, and laying out arguments step by step makes them easier to understand.
I spent several years moderating r/changemyview on Reddit, which also has this rule. Having removed at least hundreds of comments that broke it, I think the worst thing about it is that it rewards aloofness and punishes sincerity. That’s an acceptable trade-off to prevent the rise of very sincere flame wars, but it elevates people pretending to be wise at the expense of those with more experience, who likely have more deeply held but also better-informed opinions about the subject matter. This was easily the most common moderation frustration expressed by users.
Makes sense. Given that perspective, do you have any idea for a better approach?
I don’t really know; the best I can offer is sort of vaguely gesturing at LessWrong’s moderation vector and pointing in a direction.
LW’s rules go for a very soft, very subjective approach to definitions and rule enforcement. In essence, anything the moderators feel is against the LW ethos is against the rules here. That’s the right approach to take in an environment where the biggest threat to good content is bad content. Hacker News also takes this approach and it works well—it keeps HN protected against non-hackers.
ChangeMyView is somewhat under threat of bad content—if too many people post on a soapbox, then productive commenters will lose hope and leave the subreddit. However, it’s also under threat of loss of buy-in—people with non-mainstream views, or those likely to attract backlash elsewhere, need to feel that the space is safe for them to explore.
When optimising for buy-in, strictness and clarity are desirable. We had roughly consistent standards for the number of violations it took to earn a ban, and consistently escalating bans (3 days, 30 days, permanent) in line with behavioural infractions. When there were issues, there still seemed to be buy-in that we were at least consistent (even if the standards we were consistent about weren’t optimal). That consistency provided a plausible alternative to the motive uncertainty created by subjective enforcement—for example, the admins told us we were fine to continue hosting discussions regarding gender and race that were being cracked down on elsewhere on Reddit.
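For concreteness, here is a minimal sketch of that kind of escalation schedule. The ban lengths (3 days, 30 days, permanent) come from the comment above; the violation thresholds and the code shape are purely illustrative assumptions, not CMV’s actual numbers or tooling.

```python
# Illustrative sketch of a consistent, escalating ban schedule.
# Tier durations are from the comment; the thresholds are placeholders.
from dataclasses import dataclass

BAN_TIERS = [
    (3, "3-day ban"),      # first ban after 3 recorded violations
    (6, "30-day ban"),     # second ban after 6
    (9, "permanent ban"),  # third ban after 9
]

@dataclass
class UserRecord:
    violations: int = 0

def record_violation(user: UserRecord) -> str | None:
    """Increment the violation count and return a sanction if a threshold is hit."""
    user.violations += 1
    for threshold, sanction in BAN_TIERS:
        if user.violations == threshold:
            return sanction
    return None

if __name__ == "__main__":
    u = UserRecord()
    for i in range(1, 10):
        sanction = record_violation(u)
        if sanction:
            print(f"violation {i}: {sanction}")
```

The point of writing it down this way is that the schedule is fully mechanical: a user can predict exactly what happens next, which is where the buy-in comes from.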
Right now, I think LW is doing a good job of defending against bad content. I think what would make LW stronger is a semi-constitutional backbone to fall back on in times of unrest. Kind of like how the 5th pillar of Wikipedia is to ignore all rules, yet policy is still the essential basis of editing discussions.
I would like to see, in the case of commenting guidelines, clearer definitions of what excess looks like. I think the subjective approach is fine for posts for now.
I agree, that makes sense to me, and I assume your experience is probably representative.
I do wonder about a dynamic that might be obscured by your insight above. While the near term might tend to elevate lower-standard posts (and posters), I would expect one of two paths going forward.
The bad path would be something along the lines of Gresham’s Law, where the more passionate but well-informed and intelligent get crowded out by the moderation and the intellectual “posers”. I suspect that has happened. I’m probably reading that in, but I might infer it is something like the longer-term outcome your comment suggests.
The good path would be that those more passionate, informed, and thoughtful learn to adjust their communication skills and keep a check on their emotional responses. Emotion and passion are good but can cloud judgement, so learning to step back, remove that aspect from one’s thinking, and then evaluate one’s argument more objectively is very helpful—both for any online forum and personally. I would even suggest it is something of a public good, in that it becomes more of a habit/personality trait that reflects in all aspects of one’s life and social interactions.
Do you have any sense about which path we might expect from this type of moderation standard?
Note that the current setup is “ban, potentially for content a year old”, with no feedback as to the specific user content that was the reason.
There are dozens of possible reasons. Also note how the site moderators choose not to give any feedback, but instead offer effectively useless vague statements that could apply to any content or to none. See the one from Habryka above: it matches anything and nothing. See all the conditionals, including “I am actually very unsure this criticism is even true” and “I am unwilling to give any clarification or specifics”. And see the one on “I don’t think you are at the level of your interlocutor”, without specifying + or -.
Literally any piece of text both matches the rules as given and fails to match them.

As a result of this,

“The good path would be that those more passionate, informed, and thoughtful learn to adjust their communication skills and keep a check on their emotional responses.”

is outside the range of possible outcomes.
Big picture, almost all social media attempts die. Reddit/Facebook/Twitter et al. are the natural monopoly winners, and there are thousands of losers. This is just an experiment, the 2.0 attempt, and I have to respect the site moderators and owner for trying something new.
Their true policy is not to give feedback but to ban essentially everyone except a small elite set of “niche” contributors who often happen to agree with each other already, so sending empty strings at each other is the same as writing a huge post yet again arguing over the same ground.
I think my concluding thought is that this creates a situation of essentially intellectual sandcastles: a large quantity of things that sound intelligent, are built on unproven assumptions that sound correct, occupy a lot of text to describe, and are ultimately useless.
A dumber-sounding, simpler idea based on strong empirical evidence, usually using the simplest possible theory to explain it, is almost always going to be the least wrong. (And the correct policy is to add complexity to a theory only when strong, replicable evidence can’t be explained without adding it.)
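That parenthetical is essentially a decision rule. One way to make it concrete (my assumption, not anything the commenter specified) is a complexity-penalised model comparison: accept the more complex theory only if the improvement in fit outweighs the penalty for the extra parameters. A minimal sketch using BIC:

```python
# Minimal sketch: prefer the more complex model only when the data demand it.
# BIC is just one standard way to operationalise the complexity penalty.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)  # truly linear data

def bic(y, y_hat, k):
    """Bayesian information criterion for a Gaussian-noise fit with k parameters."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return k * np.log(n) + n * np.log(rss / n)

simple = np.polyval(np.polyfit(x, y, 1), x)    # linear model, 2 parameters
complex_ = np.polyval(np.polyfit(x, y, 2), x)  # quadratic model, 3 parameters

# Add the extra complexity only if it earns its keep under the penalty.
prefer_complex = bic(y, complex_, 3) < bic(y, simple, 2)
print("add the extra complexity?", prefer_complex)  # expected: False for linear data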
I think the site name should be GreaterWrong, just as MIRI is now about stopping machine intelligence research, OpenAI is closed, and so on.
I would personally say that norms are things people expect others to do, where the response to someone not doing them is simply to be surprised; this, by contrast, is a rule: something where the response to not following it is other people taking some sort of action against the person who broke it. When the people to whom a rule applies and the people who enforce it are different, the enforcers are called authorities.