Ignore user feature.
This gives most of the benefit of Eliezer’s “give people Facebook censorship power” preference without the catastrophic game-theoretic implications of doing that in an environment where you cannot choose whom you subscribe to. I expect the ability to block users to drastically cut down on both perceived and actual low-value content on the site and to significantly improve the user experience.
In this vein I’d find some sort of personal user-tagging system (similar to what RES offers) quite useful. I sometimes get users mixed up based on writing style or tone, and something like this would help distinguish that-user-I-found-obnoxious-but-quite-insightful from that-user-who-was-just-obnoxious.
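For concreteness, here is a minimal sketch of what such a tagging layer could look like as a userscript. Everything in it is illustrative: the `.comment-author` selector and the `personal-user-tags` storage key are placeholders of my own, not the site’s actual markup or API.

```typescript
// Hypothetical userscript: private per-user tags kept in localStorage.
// ".comment-author" and "personal-user-tags" are placeholder names.
const STORE_KEY = "personal-user-tags";

type TagMap = Record<string, string>; // username -> personal note

function loadTags(): TagMap {
  return JSON.parse(localStorage.getItem(STORE_KEY) ?? "{}");
}

function saveTag(username: string, tag: string): void {
  const tags = loadTags();
  tags[username] = tag;
  localStorage.setItem(STORE_KEY, JSON.stringify(tags));
}

// Show the viewer's private tag next to each author's name.
function annotateAuthors(): void {
  const tags = loadTags();
  document.querySelectorAll<HTMLElement>(".comment-author").forEach(el => {
    const tag = tags[el.textContent?.trim() ?? ""];
    if (tag && !el.dataset.tagged) {
      el.dataset.tagged = "true"; // avoid double-annotating on reruns
      el.append(` [${tag}]`);
    }
  });
}

saveTag("ExampleUser", "obnoxious but insightful"); // illustrative only
annotateAuthors();
```

Because the tags live only in the viewer’s browser, nothing about another user’s reputation leaks to anyone else.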
This would also solve several recurring drama festivals. Nice suggestion.
It can sometimes be difficult to convince A that they need to block B for the sake of their own sanity.
It’s also a feature that can be implemented as a browser plugin if anyone with that skill cares to, with or without endorsement from on high.
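A minimal sketch of that plugin approach, with the same caveat as above: the `.comment` selector and the `data-author` attribute are assumptions about the markup, not its real structure.

```typescript
// Hypothetical plugin sketch: client-side ignore list.
// ".comment" and "data-author" are assumptions about the site's markup.
const ignored = new Set<string>(
  JSON.parse(localStorage.getItem("ignored-users") ?? "[]")
);

function hideIgnored(): void {
  document.querySelectorAll<HTMLElement>(".comment").forEach(c => {
    if (ignored.has(c.dataset.author ?? "")) {
      c.style.display = "none"; // collapse rather than delete, so it is reversible
    }
  });
}

hideIgnored();
```

Collapsing with `display: none` instead of removing the nodes keeps the choice reversible from the viewer’s side.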
If A ignores B, should we allow B to respond to the comments of A?
On the one hand, allowing invisible responses gives B the last word in everything.
On the other hand, disallowing invisible responses encourages B to make off-topic responses somewhere else (as happened in the wake of the troll tax).
Yes. If other people also dislike B’s comments, then they are free to ignore B too. If the other readers do like B’s responses, then chances are A really is spouting bullshit, and B correcting them is a desirable outcome.
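To make the proposed rule concrete, here is a sketch of the per-viewer filter (all names illustrative): a comment is hidden only from viewers who ignore its author, so B may still reply under A’s comments, and everyone else judges those replies for themselves.

```typescript
interface Comment {
  author: string;
  parentAuthor?: string; // whether this is a reply to A is deliberately irrelevant
}

interface Viewer {
  ignores: Set<string>;
}

// Purely per-viewer: B's replies stay visible to everyone
// who has not personally ignored B.
function visibleTo(viewer: Viewer, comment: Comment): boolean {
  return !viewer.ignores.has(comment.author);
}

const a: Viewer = { ignores: new Set(["B"]) };
const c: Viewer = { ignores: new Set() };
const reply: Comment = { author: "B", parentAuthor: "A" };

visibleTo(a, reply); // false: A never sees B's response
visibleTo(c, reply); // true: other readers still get to judge it
```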
I think people stopped doing that later. During the first days of the troll tax it felt like a cool rebellion, but later it became merely a trivial inconvenience.
I think that in some situations individual settings may help, but generally global settings are more useful.
Imagine a site with 100 users where 1 obvious troll comes and starts posting. Which option is better: (a) the first five users downvote the troll’s comments so they become invisible for the rest of the users, or (b) each of the 100 users sees the troll’s comments and must remove them individually? Now imagine a dozen trolls.
I believe the former is better, because it requires 5% of the work to achieve the same result. And we need a good work-to-result ratio, especially since the more popular the site becomes, the more it will attract the worst kind of users. There are people ready to post thousands of stupid comments. There are people ready to make a new user account every few weeks (perhaps using the same name and appending a new number) to get out of everyone’s personal killfiles. There are people ready to use proxy sites to register dozens of user accounts. I have seen them on other websites. The more popular a site is, the more it is exposed to them. So it would be good to have mechanisms that deal with them automatically, globally.
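As a sketch of option (a), global hiding can be a single site-wide threshold; the value of -5 below matches the “first five users downvote” example above and is otherwise arbitrary.

```typescript
// One site-wide rule instead of 100 personal killfiles.
const HIDE_THRESHOLD = -5; // illustrative; matches the five-downvote example

function hiddenForEveryone(score: number): boolean {
  return score <= HIDE_THRESHOLD;
}

hiddenForEveryone(-5); // true: five downvotes spare the other 95 users any work
hiddenForEveryone(2);  // false: an ordinary comment stays visible
```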
For example, the current LW would be vulnerable to the following kind of attack: someone (one person using proxy servers, or an organized group; just imagine that we attracted the attention of a larger mindkilled group and seriously pissed them off) registers hundreds of user accounts, posts some comments, has the accounts upvote each other to accumulate a ton of karma, and downvotes everyone else. A complete site takeover, achievable with a simple script.
There is a simple mechanism that would prevent that: don’t give new users the right to upvote. A new user may only post comments until they earn, for example, 20 karma from the existing users; only then are they allowed to upvote others. For more safety, introduce a time limit: you have to get 20 karma points and then wait another week, and only then are you allowed to upvote. A similar strategy is used by Stack Exchange: users gain rights gradually, so there is a limit to the damage new users can do. You pay for your rights by contributing content. And it seems to work.
EDIT: And if you want to invite someone important who wouldn’t have the patience for these rules, you (the website admin) can simply create an exception for them, so they can post their article immediately.
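A sketch of the gating mechanism described above, using the numbers from the comment (the field names are illustrative, not any real LW schema):

```typescript
const KARMA_REQUIRED = 20;
const WAIT_MS = 7 * 24 * 60 * 60 * 1000; // one week

interface User {
  karma: number;
  reachedThresholdAt?: number; // when the account first hit 20 karma
}

// New accounts may post, but cannot upvote until they have earned
// 20 karma from existing users and then waited another week.
function canUpvote(user: User, now: number): boolean {
  return (
    user.karma >= KARMA_REQUIRED &&
    user.reachedThresholdAt !== undefined &&
    now - user.reachedThresholdAt >= WAIT_MS
  );
}
```

A script registering hundreds of fresh accounts would then control zero votes until real users had endorsed each account’s comments, which is exactly the part of the attack a script cannot fake.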