I would prefer to call it guidelines, and to generally frame the thing not as “you must follow this, or you will get banned” but rather “we have a community of polite and productive discourse (though we are not perfect), and following these guidelines will probably help you fit in nicely”.
We already have some informal norms. Making them explicit is potentially useful, not just for us, but maybe for someone who would like to replicate the quality of being “much better than the internet’s average” on some website unrelated to rationality or AI.
On the other hand, sometimes the norm proposals get so complicated and abstract that I do not really believe I would be capable of following (or even remembering) them in everyday life. Maybe it’s me being dumb or posting too late at night, but sometimes the debates get so meta that I do not even understand what either side is saying, so it is scary to imagine some of that getting codified as an official norm to follow.
As you say, we already have informal norms. Those norms determine what gets upvoted/downvoted, and also what moderators may take action on. To the extent those norms already exist and are already being acted on, it seems pretty good to me to try to express them explicitly.
I think the challenge might be accurately communicating what enforcement of the norms looks like, so people aren’t afraid of the wrong thing. I can see us not warning them enough (if we lied and said there’s no possibility of ever being banned), or warning them too much, so that they think we scrutinize every comment.
It seems hard, because I want to say “yes, if you fail at too many of these, we will give you a warning, then a rate limit, and eventually a ban”, since that’s a necessary part of maintaining a garden, but we also don’t want people to get too afraid.
Also, we currently plan to experiment with “automoderation” where, for example, users with negative karma get rate-limited, and it seems good to be able to automatically send them a message saying “very likely you’re getting downvoted for doing something on <list> wrong”.
Yeah, that does seem like a good goal. Under my current thinking, what gets upheld by the moderators is our understanding of what good discourse looks like, and the list is trying to gesture at that. And then maybe it is challenging because my models of good discourse will have pieces that are pretty meta? I’m not sure; I’ll see what comes up when I try to write more things out.