While I certainly have thoughts on all of this, let me point out one aspect of this system which I think is unusually dangerous and detrimental:
The ability (especially for arbitrary users, not just moderators) to take moderation actions that remove content, or prevent certain users from commenting, without leaving a clearly and publicly visible trace.
At the very least (if, say, you’re worried about something like “we don’t want comments sections to be cluttered with ‘post deleted’”), there ought to be a publicly viewable log of all moderation actions. (Consider the lobste.rs moderation log feature as an example of how such a thing might work.) This should apply to removal of comments and threads, and it should definitely also apply to banning a user from commenting on a post / on all of one’s posts.
Let me say again that I consider a moderation log to be the minimally acceptable moderation accountability feature on a site like this—ideally there would also be indicators in-context that a moderation action has taken place. But allowing totally invisible / untraceable moderation actions is a recipe for disaster.
Edit: For another example, note Scott’s register of bans/warnings, which is notable for the fact that Slate Star Codex is one guy’s personal blog and explicitly operates on a “Reign of Terror” moderation policy—yet the ban register is maintained, all warnings/bans/etc. are very visibly marked with red text right there in the comment thread which provokes them—and, I think, this greatly contributes to the atmosphere of open-mindedness that SSC is now rightly famous for.
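For concreteness, here is a minimal sketch of what a public moderation-log entry might record, in the spirit of the lobste.rs log (TypeScript; the field names and action types are illustrative assumptions, not the actual LW2 or lobste.rs schema):

```typescript
// Hypothetical shape of a public moderation-log entry (illustrative only).
interface ModerationLogEntry {
  id: string;
  timestamp: Date;
  moderator: string;   // who acted: a post author or a site moderator
  action:
    | "delete-comment"
    | "delete-thread"
    | "ban-user-from-post"
    | "ban-user-from-author";
  targetUser: string;  // the user whose content was removed or who was banned
  postId: string;      // the post the action applies to ("*" = all of the author's posts)
  reason?: string;     // optional public explanation
}

// The log itself is just an append-only list that anyone can read.
const moderationLog: ModerationLogEntry[] = [];

function logAction(entry: ModerationLogEntry): void {
  moderationLog.push(entry);
}

// Example: banning a commenter from all of one's posts leaves a visible record.
logAction({
  id: "1",
  timestamp: new Date(),
  moderator: "exampleAuthor",
  action: "ban-user-from-author",
  targetUser: "exampleCommenter",
  postId: "*",
  reason: "repeated off-topic comments",
});
```

The essential property is that every removal or ban appends a record that any reader can later inspect.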
I’m also mystified as to why traceless deletion/banning are desirable properties to have on a forum like this. But (with apologies to the moderators) I think consulting the realpolitik will spare us the futile task of litigating these issues on the merits. Consider it instead a fait accompli with the objective to attract a particular writer LW2 wants by catering to his whims.
For whatever reason, Eliezer Yudkowsky wants the ability to block commenters and to tracelessly delete comments on his own work, and he’s been quite clear this is a condition for his participation. Lo and behold, precisely these features have been introduced, with suspiciously convenient karma thresholds which allow EY (at his current karma level) to tracelessly delete/ban on his own promoted posts, yet exclude (as far as I can tell) the great majority of other writers with curated/front-page posts from being able to do the same.
Given the popularity of EY’s writing (and that LW2 wants to host his future work), the LW2 team are obliged to weigh the (likely detrimental) addition of these features against the likely positives of his future posts. Going for the latter is probably the right judgement call to make, but let’s not pretend it is a principled one: we are, as the old saw goes, just haggling over the price.
Yeah, I didn’t want to make this a thread about discussing Eliezer’s opinion, so I didn’t put that front and center, but the fact that Eliezer is only happy to crosspost his writing if he has the ability to delete comments was definitely a big consideration.
Here is my rough summary of how this plays into my current perspective on things:
1. Allowing users to moderate their own posts and set their own moderation policies on their personal blogs is something I wanted before we even talked to Eliezer about LW2 the first time.
2. Allowing users to moderate their own front-page posts is not something that Eliezer requested (I think he would be happy with them just being personal posts), but is a natural consequence of wanting to allow users to moderate their own posts, while also not giving up our ability to promote the best content to the front page and to curated.
3. Allowing users to delete things without a trace was a request by Eliezer, but is also something I thought about independently to deal with stuff like spam and repeat offenders (for example, Eugine has created over 100 comments on one of Ozy’s posts, and you don’t want all of them to show up as deleted stubs). I expect we wouldn’t have built the feature as it currently stands without Eliezer, but I hadn’t actually considered a moderation-log page like the one Said pointed out, and I actually quite like that idea, and don’t expect Eliezer to object too much to it. So that might be a solution that makes everyone reasonably happy.
I think consulting the realpolitik will spare us the futile task of litigating these issues on the merits. Consider it instead a fait accompli with the objective to attract a particular writer LW2 wants by catering to his whims.
As usual Greg, I will always come to you first if I ever need to deliver a well-articulated sick burn that my victim needs to read twice before they can understand it ;-)
Edit: Added a smiley to clarify this was meant as a joke.
I actually quite like the idea of a moderation log, and Ben and Ray also seem to like it. I hadn’t really considered that as an option, and my model is that Eliezer and other authors wouldn’t object to it either, so this seems like something I would be quite open to implementing.
I actually think some hesitation and thought is warranted on that particular feature. A naively-implemented auto-filled moderation log can significantly tighten the feedback loop for bad actors trying to evade bans. Maybe if there were a time delay, so moderation actions only become visible when they’re a minimum number of days old?
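As a minimal sketch of that time-delay idea (the types, names, and seven-day figure below are assumptions for illustration, not a concrete LW2 proposal):

```typescript
// Only show moderation-log entries once they are at least `minAgeDays` old,
// so the log cannot serve as an immediate feedback signal for ban evasion.
interface LogEntry {
  action: string;
  createdAt: Date;
}

function visibleEntries(
  log: LogEntry[],
  minAgeDays: number,
  now: Date = new Date()
): LogEntry[] {
  const minAgeMs = minAgeDays * 24 * 60 * 60 * 1000;
  return log.filter((entry) => now.getTime() - entry.createdAt.getTime() >= minAgeMs);
}

// Example: with a 7-day delay, yesterday's action is withheld,
// while a month-old one is shown.
const dayMs = 24 * 60 * 60 * 1000;
const log: LogEntry[] = [
  { action: "delete-comment", createdAt: new Date(Date.now() - 1 * dayMs) },
  { action: "ban-user-from-post", createdAt: new Date(Date.now() - 30 * dayMs) },
];
console.log(visibleEntries(log, 7)); // only the 30-day-old entry
```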
There is some sense in what you say, but… before we allow concerns like this to guide design decisions, it would be very good to do some reasonably thorough investigation into whether other platforms that implement moderation logs actually have this problem. (The admin and moderators of lobste.rs, for example, hang out in the #lobsters IRC channel on Freenode. Why not ask them whether they have found the moderation log to result in a significant ban-evasion issue?)
I really like the moderation log idea—I think it could be really good for people to have a place where they can go if they want to learn what the norms are empirically. I also propose there be a similar place which stores the comments explaining why posts are curated.
(Also note that Satvik Beri said to me I should do this a few months ago and I forgot and this is my fault.)
I’m just a lurker, but as an FYI, on The Well, hidden comments were marked <hidden> (and clickable) and deleted comments were marked <scribbled> and it seemed to work out fine. I suppose with more noise, this could be collapsed to one line: <5 scribbled>.
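A rough sketch of that collapsing behaviour (hypothetical types; not how The Well or LW2 actually implement it): consecutive deleted comments in a thread are folded into a single placeholder line.

```typescript
// Render a thread, replacing each run of deleted comments with one
// "<N scribbled>" placeholder instead of N separate stubs.
interface CommentView {
  author: string;
  body: string;
  deleted: boolean;
}

function renderThread(comments: CommentView[]): string[] {
  const lines: string[] = [];
  let run = 0; // length of the current run of deleted comments

  const flush = () => {
    if (run === 1) lines.push("<scribbled>");
    if (run > 1) lines.push(`<${run} scribbled>`);
    run = 0;
  };

  for (const c of comments) {
    if (c.deleted) {
      run += 1; // accumulate the run instead of emitting a stub per comment
    } else {
      flush();
      lines.push(`${c.author}: ${c.body}`);
    }
  }
  flush();
  return lines;
}

// Example: three deleted comments in a row appear as a single "<3 scribbled>" line.
console.log(
  renderThread([
    { author: "a", body: "first", deleted: false },
    { author: "b", body: "", deleted: true },
    { author: "c", body: "", deleted: true },
    { author: "d", body: "", deleted: true },
    { author: "e", body: "last", deleted: false },
  ])
);
```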
I agree. There are a few fairly simple ways to implement this kind of transparency.
1. When a comment is deleted, change its title to [deleted] and remove the content. This at least shows when censorship is happening and roughly how much.
2. When a comment is deleted, do as above, but give users the option to reveal it by clicking a “show comment” button or something similar (see the sketch below).
3. Have a “show deleted comments” option on users’ profile pages. Users who want to avoid seeing the kind of content that is typically censored can leave it off; those who would prefer to see everything can enable the option and see all comments.
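A minimal sketch of the first two options (hypothetical types and names, not an actual LW2 implementation): the deleted comment is shown as a stub by default, and its content is revealed only if the reader opts in.

```typescript
// Option 1: render a deleted comment as a "[deleted]" stub with no content.
// Options 2/3: reveal the original content only when the reader opts in,
// either per comment ("show comment") or via a profile-level setting.
interface StoredComment {
  title: string;
  body: string;
  deleted: boolean;
}

function renderComment(
  comment: StoredComment,
  showDeleted: boolean
): { title: string; body: string } {
  if (comment.deleted && !showDeleted) {
    return { title: "[deleted]", body: "" };
  }
  return { title: comment.title, body: comment.body };
}

// Example usage:
const c: StoredComment = { title: "Re: moderation", body: "original text", deleted: true };
console.log(renderComment(c, false)); // { title: "[deleted]", body: "" }
console.log(renderComment(c, true));  // full comment, for readers who opt in
```

Note that the reveal options presuppose the deleted content is retained server-side rather than actually erased.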
I think these features would add at least some transparency to comment moderation. I’m still unsure how to make user bans transparent. I’m worried that without doing so, bad admins can just ban users they dislike and give the impression of a balanced discussion with little censorship.
User bans can be made transparent via the sort of centralized moderation log I described in my other comment. (For users banned by individual authors from their personal blogs, there should probably also be a specific list, on the banning author’s user page, of everyone they’ve banned from their posts.)
A central log would indeed allow anyone to see who was banned and when. My concern is more that such a solution would be practically ineffective. I think that most people reading an article aren’t likely to navigate to the central log and search the ban list to see how many people have been banned by said article’s author. I’d like to see a system for flagging up bans which is both transparent and easy to access, ideally so anyone reading the page/discussion will notice if banning is taking place and to what extent. Sadly, I haven’t been able to think of a good solution which does that.
Yeah, I agree it doesn’t create the ideal level of transparency. In my mind, a moderation log is more similar to an accounting solution than an educational solution, where the purpose of accounting is not something that is constantly broadcasted to the whole system, but is instead used to backtrack if something has gone wrong, or if people are suspicious that there is some underlying systematic problem going on. Which might get you a lot of the value that you want, for significantly lower UI-complexity cost.
I believe it was Eliezer who (perhaps somewhere in the Sequences) enjoined us to consider a problem for at least five minutes, by the clock, before judging it to be unsolvable—and I have found that this applies in full measure in UX design.
Consider the following potential solutions (understanding them to be the products of a brainstorm only, not a full and rigorous design cycle):
1. A button (or other UI element, etc.) on every post, along the lines of “view the history of moderation actions which apply to this post”.
2. A flag attached to posts where moderation has occurred, which, when clicked, would take you to the central moderation log (or the user-specific one) and highlight all entries that apply to the referring post (see the sketch below).
3. The same as #2, but with the flag coming in two “flavors”—one for “the OP has taken moderation actions”, and one for “the LW2 admin team has taken moderation actions”.
This is what I was able to come up with in five minutes of considering the problem. These solutions all seem to me to be quite unobtrusive, and yet at the same time, “transparent and easy to access”, as per your criteria. I also do not see any fundamental design or implementation difficulties that attach to them.
No doubt other approaches are possible; but at the very least, the problem seems eminently solvable, with a bit of effort.
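A rough sketch of option 2 from the list above (the route, parameter, and names are hypothetical, not an existing LW2 endpoint):

```typescript
// Build the flag shown on a post that has moderation history: a link to the
// central moderation log, pre-filtered (and highlighted) for that post.
interface LogEntry {
  postId: string;
  moderator: string;
  action: string;
}

function moderationFlag(postId: string, log: LogEntry[]): string | null {
  const entries = log.filter((e) => e.postId === postId);
  if (entries.length === 0) return null; // untouched posts get no flag at all
  return `/moderation-log?post=${encodeURIComponent(postId)} (${entries.length} actions)`;
}

// Example: a post with two logged actions gets a flag; a clean post gets none.
const log: LogEntry[] = [
  { postId: "abc", moderator: "author1", action: "delete-comment" },
  { postId: "abc", moderator: "admins", action: "ban-user-from-post" },
];
console.log(moderationFlag("abc", log)); // "/moderation-log?post=abc (2 actions)"
console.log(moderationFlag("xyz", log)); // null
```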
Why exactly do you find it to be unusually dangerous and detrimental? The answer may seem obvious, but I think that it would be valuable to be explicit.
Let’s focus on the substance, please.