What’s the minimum set of powers (besides the ability to kick a user off the site) that would make being a Moderator non-frustrating? One-off feature requests as part of a “restart LW” focus seem easier than trying to guarantee tech-support responsiveness.
When I was doing the job, I would have appreciated having an anonymized offline copy of the database; specifically the structure of votes.
Anonymized to protect me from my own biases: replacing the user handles with random identifiers, so that I would first have to reach a decision “user xyz123 is abusing the voting mechanism” or “user xyz123 is a sockpuppet for user abc789”, describe my case to other mods, and only after getting their agreement would I learn who “user xyz123” actually is.
(Of course, getting the database without anonymization would be equally good if that were faster; I could simply anonymize it myself after receiving it.)
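A minimal sketch of such an anonymization step, assuming the vote dump is a list of records with voter and target handles; the field names and the sealed-mapping file are illustrative assumptions, not the actual LW schema:

```python
import json
import secrets

def anonymize_votes(votes, seal_path="handle_map.json"):
    """Replace real user handles in a vote dump with random identifiers.

    Assumes each vote is a dict like
    {"voter": "alice", "target": "bob", "direction": -1, "comment_id": 123}.
    The handle-to-pseudonym mapping is written to a separate file that
    stays sealed until the other moderators agree to de-anonymize a
    specific identifier.
    """
    mapping = {}

    def pseudonym(handle):
        if handle not in mapping:
            mapping[handle] = "user_" + secrets.token_hex(4)  # e.g. "user_a3f29c01"
        return mapping[handle]

    anonymized = [
        {**v, "voter": pseudonym(v["voter"]), "target": pseudonym(v["target"])}
        for v in votes
    ]
    with open(seal_path, "w") as f:
        json.dump(mapping, f)  # consulted only after the other mods agree
    return anonymized
```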
Offline, so that I could freely run any computations I imagine without increasing the hosting bills. It would also be faster, not limited by internet bandwidth, and I would be free to use any programming language.
What specific computations would I run there? Well, that’s kind of the point: I don’t know in advance. I would try different heuristics and see what works. I also suspect some level of “security by obscurity” would be necessary, to keep Eugine from adjusting to my algorithms. (For example, if I defined karma assassination as “user X downvoted all comments by user Y” and made that information public, Eugine could simply downvote all comments but one, to avoid detection. Similarly, if sockpuppeting were defined as “user X posts no comments, and only upvotes everything but user Y”, Eugine could make X post exactly one comment and upvote one random comment by someone else. The only way to make this game harder for the opponent is to keep the heuristics private; they would be explained only to the other moderators.)
So I would try different definitions of “karma assassination” and different definitions of “sockpuppet”, see what the algorithm reports, and check whether the reported data matches my original intuition. (Maybe the algorithm reports too much: for example, if a user posted only one comment on LW, downvoting that comment would be detected as “downvoting all comments from a given user”, which is obviously not what I had in mind. Or maybe there was a spammer, and someone downvoted all of his comments perfectly legitimately.)
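A minimal sketch of one such tunable heuristic, assuming the dump can be reduced to (voter, target author, direction) records; the field names and threshold values are illustrative only:

```python
from collections import defaultdict

def karma_assassination_candidates(votes, comment_counts,
                                   min_comments=5, threshold=0.9):
    """Flag (voter, target) pairs where `voter` downvoted at least
    `threshold` of `target`'s comments.

    votes:          iterable of (voter, target_author, direction) tuples,
                    direction being +1 for an upvote, -1 for a downvote
    comment_counts: dict mapping author -> total number of their comments

    The knobs exist because a strict "downvoted ALL comments" rule is both
    easy to game (leave one comment alone) and prone to false positives
    (a target with a single comment, or a legitimately downvoted spammer).
    """
    downs = defaultdict(int)
    for voter, target, direction in votes:
        if direction < 0 and voter != target:
            downs[(voter, target)] += 1

    flagged = []
    for (voter, target), n in downs.items():
        total = comment_counts.get(target, 0)
        if total >= min_comments and n / total >= threshold:
            flagged.append((voter, target, n, total))
    return flagged
```

Adjusting `min_comments` and `threshold` and eyeballing the resulting report is exactly the kind of iteration described above.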
Then, once I believed I had a correct algorithm, the next step would be to set up a script that monitors the database and automatically reports to me any behavior matching the heuristic. I believe that investigating things only after users report them is already too late, and introduces biases. Some people will not report karma assassination, because they mistake it for genuine dislike by the community; especially new users intimidated by the website. On the other hand, some people will report every single organic downvote, even when it was well deserved. I saw both cases during my time in the role. It is better if an algorithm reports suspicious behavior. (The existing data would be used to define and test heuristics for what counts as “suspicious behavior”.)
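Once a heuristic survives testing against the historical data, the monitoring step is just the same check run on a schedule. A sketch, reusing the heuristic above; `fetch_votes`, `fetch_comment_counts`, and `report` are hypothetical hooks into the live system:

```python
import time

def monitor_votes(fetch_votes, fetch_comment_counts, report,
                  interval_hours=24):
    """Re-run the heuristic on fresh data on a schedule and push any
    matches to the moderators, instead of waiting for user reports."""
    while True:
        flagged = karma_assassination_candidates(
            fetch_votes(), fetch_comment_counts())
        if flagged:
            report(flagged)  # e.g. mail the mods or post to a private channel
        time.sleep(interval_hours * 3600)
```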
That is what I would have wanted. However, Vaniver may have completely different ideas, and I am not speaking for him. For me it is already too late; I have a new job and a small baby, and not enough free time to spend examining patterns in LW data. Two years ago, I would have had the time.
(Another thing: the voting model has a few obvious security holes. I would need some changes to the voting mechanism implemented, preferably without a long public debate about exactly how the current situation could be abused to take over the website with a simple script. If I had a free weekend, I could write a script that would nuke the whole website. If Eugine has at least average programming skills, he can do this too; and if we start an arms race against him, he may be motivated to do it as a final revenge.)
It is actually not obvious to me that we gain by having upvotes/downvotes be private (rather than letting readers see who upvoted or downvoted which post, as on Facebook). But I haven’t thought about it much.
If upvotes/downvotes are public, some people are going to reward/punish those who upvoted/downvoted them.
It can happen without full awareness… the user will simply notice that X upvotes them often and Y downvotes them often… they will start liking X and disliking Y… they will start getting pleasant feelings when looking at comments written by X (“my friend is writing here, I feel good”) and unpleasant feelings when looking at comments written by Y (“oh no, my nemesis again”)… and that will be reflected in how they vote.
And this is the charitable explanation. Some people will do this with full awareness, happy to provide an incentive for others to upvote them and a deterrent to those who would downvote. -- Humans are like this.
Even if the behavior described above did not happen, people would still instinctively expect it to happen, so it would still have a chilling effect. -- On the other hand, some people might enjoy publicly downvoting e.g. Eliezer, to score contrarian points. Either way, different forms of signalling would get involved.
From a game-theoretic view, if some people had a reputation for being magnanimous about downvotes while others were suspected of being vengeful about them, people would be more willing to downvote the former, which creates an incentive for passive-aggressive behavior. (I am talking about a situation where everyone suspects that X downvotes those who downvoted him, but X can plausibly deny doing so, claiming he genuinely disliked everything he downvoted; you can either have an infinite debate about it with X acting outraged by the unfair accusations, or just let it slide, but either way everyone knows that downvoting X is bad for their own karma.)
tl;dr: the same reasons why elections are secret.
EDIT:
After reading Raemon’s comment I am less sure about what I wrote here. I still believe that public upvotes and downvotes can cause unnecessary drama, but maybe that would still be an improvement over the situation where a reasonable comment gets 10 downvotes from sockpuppet accounts, or someone gets one downvote on every comment, including those written years ago, and it is not clearly visible what exactly is happening unless moderators get involved (and sometimes not even then).
On the other hand, I believe that some content (too stupid, or too aggressive) should be removed from the debate. Maybe not deleted completely, but at least hidden by default (as comments with karma −5 or less currently are). But I agree that this should not apply to not-completely-insane comments posted by newbies in good faith; such comments should merely be sorted to the bottom of the page. What should be removed is violations of community norms, and “spamming” (i.e. trying to win a debate by sheer quantity of comments that bring no new points, merely inflating the visibility of already expressed ones).
At the moment I am imagining some kind of hybrid system, where upvotes (either private or public; no clear opinion on this yet) could be given freely, but downvotes could only be given for specific reasons (they would be equivalent to flagging), and in case of abuse a user could lose the ability to downvote (i.e. downvotes would be either public, or at least visible to moderators).
And here is a quick-fix idea: as a first step, make downvotes visible to moderators. That would at least allow them to quickly detect and remove Eugine’s sockpuppets. -- For example, a moderator could have a new button below each comment which displays the list of downvoters (with hyperlinks to their user pages). Also, make a script that reverts all votes cast by a given user, and make it easily accessible from the “banned users” admin page (i.e. it can only be applied to already-banned users). To help other moderators spot possible abuse, the name of the moderator who ran the script for a user could be displayed on the same admin page. (As an extra precaution, the “revert all votes” button could be made inaccessible to the moderator who banned the user, so that at least two moderators must participate in a vote purge; see the sketch below.)
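A sketch of what that vote-purge script might look like, including the two-moderator safeguard; `db` is assumed to be something like a sqlite3 connection, and the table and column names are invented for illustration, not the actual LW schema:

```python
def purge_votes(db, banned_user_id, requesting_mod, banning_mod):
    """Revert all votes cast by an already-banned user.

    Implements the two-moderator safeguard described above: the moderator
    who banned the user cannot also trigger the purge.
    """
    if requesting_mod == banning_mod:
        raise PermissionError("a second moderator must trigger the purge")
    rows = db.execute(
        "SELECT comment_id, direction FROM votes WHERE voter_id = ?",
        (banned_user_id,)).fetchall()
    for comment_id, direction in rows:
        # undo the karma effect of each vote, then drop the vote records
        db.execute("UPDATE comments SET score = score - ? WHERE id = ?",
                   (direction, comment_id))
    db.execute("DELETE FROM votes WHERE voter_id = ?", (banned_user_id,))
    db.execute(  # audit trail shown on the banned-users admin page
        "INSERT INTO vote_purges (user_id, requested_by) VALUES (?, ?)",
        (banned_user_id, requesting_mod))
    db.commit()
```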
It’s not actually obvious to me that downvotes are even especially useful. I understand what purpose they’re supposed to serve, but I’m not sure they actually serve it.
It seems like if we removed them, a major tool available to trolls is just gone.
I think downvoting is also fairly punishing for newcomers—I’ve heard a few people mention they avoided Less Wrong due to worry about downvoting.
Good vs. bad posts could be discerned just by looking at total likes, the way it is on Facebook. Actual spam could just be reported rather than downvoted, which would trigger mod attention but have no visible effect.
Alternatively, go with the Hacker News model of only enabling downvotes after you’ve accumulated a large amount of karma (enough to put you in, say, the top 0.5% of users). I think this gets most of the advantages of downvotes without the issues.
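A sketch of that gate, assuming the site can see every active user’s karma; the 0.5% figure is the one suggested above, everything else is an assumption:

```python
def can_downvote(user_karma, all_karmas, top_fraction=0.005):
    """Hacker-News-style gate: only users whose karma puts them in the
    top `top_fraction` (here 0.5%) of all users may downvote."""
    if not all_karmas:
        return False
    cutoff_rank = max(1, int(len(all_karmas) * top_fraction))
    cutoff = sorted(all_karmas, reverse=True)[cutoff_rank - 1]
    return user_karma >= cutoff
```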
I agree. In addition to the numerous good ideas suggested in this tree, we could also try the short-term solution of turning off all downvoting for the next 3 months. This might well increase the user population.
(Or similar variants like turning off ‘comment score below threshold’ hiding, etc)
Good vs. bad posts could be discerned just by looking at total likes, the way it is on Facebook.
Preferably also sorted by the total number of likes. Otherwise the only difference between a comment with 1 upvote and one with 15 upvotes is a single character on screen that takes some attention to even notice.
Actual spam could just be reported rather than downvoted
There are some kinds of behavior which, in my opinion, should be actively discouraged besides spam: stubborn stupidity, or verbal aggressiveness towards other debaters. It would be nice to have a mechanism to do something about them, preferably without getting moderators involved. But maybe those could also be flagged, and maybe moderators should have a way to attach a warning to a comment without removing it completely. (I imagine red text saying “this comment is unnecessarily rude”, which would also effectively halve the comment’s number of likes for the purpose of sorting; see the sketch below.)
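A sketch of that sorting rule; the halving factor is the one suggested above, and the field names are assumptions:

```python
def sort_score(likes, has_mod_warning):
    """Effective score used when ordering comments: a moderator warning
    halves the like count for sorting without hiding the comment."""
    return likes / 2 if has_mod_warning else likes

# usage: comments.sort(key=lambda c: sort_score(c.likes, c.warned), reverse=True)
```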
I think that upvotes/downvotes being private has important psychological effects. If you can get a sense of who your “fans” vs “enemies” are, you will inevitably try to play to your “fans” and develop dislike for your “enemies.” I think this is the primary thing that makes social media bad.
My current cutoff for what counts as a “social media” site (I have resolved to never use social media again) is “is there a like mechanic where I can see who liked me?” If votes on LW were public, by that rule, I’d have to quit.
Could you elaborate on what you mean by this? “Posting different kinds of articles on LW and writing more of the kind of stuff that gets upvoted” also sounds like “playing to your fans” to me—in both cases you’re responding to feedback and (rationally) tailoring your content towards your preferred target audience, even though in the LW case, you aren’t entirely sure of who your target audience consists of.
My current cutoff for what counts as a “social media” site (I have resolved to never use social media again) is “is there a like mechanic where I can see who liked me?” If votes on LW were public, by that rule, I’d have to quit.
Do you mean that the group dynamic itself changes for the worse if likes are visible to those who want to see them, so that it doesn’t matter if there is a setting that makes the likes invisible to you in particular? It’s a tradeoff, some things may get worse, others may get better. I don’t have a clear sense of this tradeoff.
Imagine that you’re a new person who’s a little shy about the forum, but has read a large part of the Sequences and really thinks that Eliezer is awesome, and then you make your first post and see that Eliezer himself has downvoted you.
The psychological impact of that downvote would likely be a lot bigger than the impact a single downvote should have.
OTOH, making upvotes public would probably be a good change: seeing a list of people who upvoted you feels a lot more motivating to me than just getting an anonymous number.