I appreciate the effort, and I agree with most of the points made, but I think resurrect-LW projects are probably doomed unless we can get a proactive, responsive admin/moderation team. Nick Tarleton talked about this a bit last year:
“A tangential note on third-party technical contributions to LW (if that’s a thing you care about): the uncertainty about whether changes will be accepted, uncertainty about and lack of visibility into how that decision is made or even who makes it, and lack of a known process for making pull requests or getting feedback on ideas are incredibly anti-motivating.” (http://lesswrong.com/lw/n0l/lesswrong_20/cy8e)
That’s obviously problematic, but I think it goes way beyond just contributing code. As far as I know, right now, there’s no one person with both the technical and moral authority to:
set the rules that all participants have to abide by, and enforce them
decide principles for what’s on-topic and what’s off-topic
receive reports of trolls, and warn or ban them
respond to complaints about the site not working well
decide what the site features should be, and implement the high-priority ones
Pretty much any successful subreddit, even smallish ones, will have a team of admins who handle this stuff, and who can be trusted to look at things that pop up within a day or so (at least collectively). The highest intellectual-quality subreddit I know of, /r/AskHistorians, has extremely active and rigorous moderation, to the extent that a majority of comments are often deleted. Since we aren’t on Reddit itself, I don’t think we need to go quite that far, but there has to be something in place.
Which needs to be backed up by a responsive tech support team. Without tech support behind them, the moderators are only able to do the following:
1) remove individual comments; and 2) ban individual users.
It seems like a lot of power, but when you deal with someone like Eugine, for example, it is completely useless. All you can do is play whack-a-mole with banning his obvious sockpuppet accounts. You can’t even revert the downvotes made by those accounts. You can’t detect the sockpuppets that don’t post comments (but are used to upvote the comments made by the active sockpuppets, which then quickly use their karma to mod-bomb the users Eugine doesn’t like). So all you can do is delete the mod-bombing accounts after the damage is done. What’s the point? It costs Eugine about 10 seconds to create a new one.
(And then Eugine will post some paranoid rant about how you have some super shady moderator powers, and a few local useful idiots will go “yeah, maybe the mods are too powerful, we need to stop them”, and you keep banging your head against the wall in frustration, wishing you actually had a fraction of the powers Eugine accuses you of having.)
As the situation is now, the moderators are completely powerless to prevent or even reduce Eugine’s brigading, and the tech support doesn’t give a fuck, and will cite privacy concerns when you ask them for more direct access to the database. At least that is my experience as a former moderator. Appointing a new moderator, or even a hundred new moderators, would not change anything about this, unless they get direct access to the data, or more supportive tech support.
EDIT:
And before the problem is fixed, what good will it do to send new users here? First, Eugine will automatically downvote all women. Second, Eugine will downvote anyone who disagrees with him. It’s fucking motivating to write for a website where an obsessed user can de facto single-handedly remove all your content and/or moderate the whole discussion about it. And everyone is just looking away and pretending that this doesn’t happen, and the real problem is… whatever else.
Come on, if LW is unable to enforce a ban on a single person blatantly abusing the rules and harassing many users who actually contributed, or wanted to contribute, some quality content… the solution certainly isn’t to keep telling more people to come and contribute. Let’s finally talk about the elephant in the room.
(Mentioning the elephant in the room will get your comment immediately downvoted to −10 though. Just saying.)
I was including tech support under “admin/moderation”—obviously, the ability to e.g. IP-ban people is important (along with access to the code and the database generally). Sorry for any confusion.
That’s okay, I just posted to explain the details, to prevent people from inventing solutions that predictably couldn’t change anything, such as appointing new or more moderators. (I am not saying more help wouldn’t be welcome; it’s just that without better access to the data, they couldn’t achieve much either.)
Wow, that is a pretty big issue. Thank you for mentioning this.
Agree with all your points. Personally, for issues like this one, I would much rather post on a site where moderation is too powerful and moderators err toward being too opinionated. Most people don’t realize just how much work it is to moderate a site, or how much effort is needed to make it anywhere close to useful.
What’s the minimum set of powers (besides ability to kick a user off the site) that would make being a Moderator non-frustrating? One-off feature requests as part of a “restart LW” focus seem easier than trying to guarantee tech support responsiveness.
When I was doing the job, I would have appreciated having an anonymized offline copy of the database; specifically the structure of votes.
Anonymized to protect me from my own biases: replacing the user handles with random identifiers, so that I would first have to make the decision “user xyz123 is abusing the voting mechanism” or “user xyz123 is a sockpuppet for user abc789”, describe my case to the other mods, and only after getting their agreement would I learn who “user xyz123” actually is.
(But of course, getting the database without anonymization—if that were faster—would be equally good; I could just anonymize it myself after receiving it.)
Offline so that I could freely run any computations I imagine without increasing the hosting bills. Also, to make things faster, to not be limited by internet bandwidth, and to be free to use any programming language.
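To make the anonymization concrete, here is a minimal sketch of that step, assuming the votes export as simple (voter, author, comment_id, direction) records; the format and all names here are my invention for illustration, not the actual LW schema:

```python
import secrets

def pseudonymize_votes(votes):
    """Replace real user handles with stable random identifiers.

    `votes` is assumed to be an iterable of (voter, author, comment_id,
    direction) tuples exported from the database. The real mapping is
    returned separately, to be consulted only after the other mods agree
    that a case ("user xyz123 is abusing the voting mechanism") is proven.
    """
    mapping = {}

    def alias(handle):
        # Assign each handle a random identifier the first time it appears.
        if handle not in mapping:
            mapping[handle] = "user " + secrets.token_hex(4)
        return mapping[handle]

    anonymized = [
        (alias(voter), alias(author), comment_id, direction)
        for voter, author, comment_id, direction in votes
    ]
    return anonymized, mapping
```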
What specific computations would I run there? Well, that’s kind of the point: I don’t know in advance. I would try different heuristics and see what works. Also, I suspect there would have to be some level of “security by obscurity”, to avoid Eugine adjusting to my algorithms. (For example, if I defined karma assassination as “user X downvoted all comments by user Y” and made that information public, Eugine could simply downvote all comments but one, to avoid detection. Similarly, if sockpuppeting were defined as “user X posts no comments, and only upvotes comments by user Y”, Eugine could make X post exactly one comment, and upvote one random comment by someone else. The only way to make this game harder for the opponent is to not make the heuristics public. They would merely be explained to the other moderators.)
So I would try different definitions of “karma assassination” and different definitions of “sockpuppets”, see what the algorithm reports, and check whether the reported data matches my original intuition. (Maybe the algorithm reports too much: e.g. if a user posted only one comment on LW, then downvoting that comment would be detected as “downvoting all comments from a given user”, although I obviously didn’t have that case in mind. Or maybe there was a spammer, and someone downvoted all his comments perfectly legitimately.)
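As a toy illustration of one such heuristic (the thresholds are placeholders, and the coverage deliberately stays below 100% so that sparing a single comment does not evade detection; the real values would only be shared among moderators):

```python
from collections import defaultdict

def suspected_assassins(votes, min_comments=10, coverage=0.8):
    """Flag voters who downvoted nearly all comments of some author.

    `votes` is the anonymized (voter, author, comment_id, direction) list.
    Note: `total` below counts only comments that received at least one
    vote; a real version would join against the comments table instead.
    """
    comments_by_author = defaultdict(set)
    downvoted = defaultdict(set)  # (voter, author) -> downvoted comment ids
    for voter, author, comment_id, direction in votes:
        comments_by_author[author].add(comment_id)
        if direction < 0:
            downvoted[(voter, author)].add(comment_id)

    flagged = []
    for (voter, author), hits in downvoted.items():
        total = len(comments_by_author[author])
        # Require enough comments so that downvoting a one-comment user
        # does not count as "downvoting all comments by a given user".
        if total >= min_comments and len(hits) / total >= coverage:
            flagged.append((voter, author, len(hits), total))
    return flagged
```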
Then the next step, once I believed I had a correct algorithm, would be to set up a script to monitor the database and automatically report to me the kind of behavior that matches the heuristic. This is because I believe that investigating things reported by users already happens too late, and introduces biases. Some people will not report karma assassination, because they mistake it for genuine dislike by the community; especially new users intimidated by the website. On the other hand, some people will report every single organic downvote, even if they deserved it. I saw both cases during my time as moderator. It’s better if an algorithm reports suspicious behavior. (The existing data would be used to define and test heuristics about what “suspicious behavior” is.)
That would have been what I wanted. However, Vaniver may have completely different ideas, and I am not speaking for him. Now it’s already too late for me; I have a new job and a small baby, and not enough free time to spend examining patterns in LW data. Two years ago, I would have had the time.
(Another thing: the voting model has a few obvious security holes. I would need some changes in the voting mechanism implemented, preferably without having a long public debate about how exactly the current situation can be abused to take over the website with a simple script. If I had a free weekend, I could write a script that would nuke the whole website. If Eugine has at least average programming skills, he can do this too; and if we start an arms race against him, he may be motivated to do it as a final act of revenge.)
It is actually not obvious to me that we gain by having upvotes/downvotes be private (rather than making it visible to readers who upvoted or downvoted each post, as on Facebook). But I haven’t thought about it much.
If upvotes/downvotes are public, some people are going to reward/punish those who upvoted/downvoted them.
It can happen without full awareness… the user will simply notice that X upvotes them often and Y downvotes them often… they will start liking X and disliking Y… they will start getting pleasant feelings when looking at comments written by X (“my friend is writing here, I feel good”) and unpleasant feelings when looking at comments written by Y (“oh no, my nemesis again”)… and that will be reflected by how they vote.
And this is the charitable explanation. Some people will do this with full awareness, happy to provide incentives for others to upvote them, and a deterrent for those who would downvote them. -- Humans are like this.
Even if the behavior described above did not happen, people would still instinctively expect it to happen, so it would still have a chilling effect. -- On the other hand, some people might enjoy publicly downvoting e.g. Eliezer, to get contrarian points. Either way, different forms of signalling would get involved.
From a game-theoretic view, if some people had a reputation for being magnanimous about downvotes, and other people were suspected of being vengeful about them, people would be more willing to downvote the former, which creates an incentive for passive-aggressive behavior. (I am talking about a situation where everyone suspects that X downvotes those who downvoted him, but X can plausibly deny doing so, claiming he genuinely disliked all the stuff he downvoted; you can either have an infinite debate about it with X acting outraged about unfair accusations, or just let it slide, but either way everyone knows that downvoting X is bad for their own karma.)
tl;dr—the same reasons why elections are secret
EDIT:
After reading Raemon’s comment I am less sure about what I wrote here. I still believe that public upvotes and downvotes can cause unnecessary drama, but maybe that would still be an improvement over the situation where a reasonable comment gets 10 downvotes from sockpuppet accounts, or someone gets one downvote for each comment including those written years ago, and it is not clearly visible what exactly is happening unless moderators get involved (and sometimes not even then).
On the other hand, I believe that some content (too stupid, or too aggressive) should be removed from the debate. Maybe not deleted completely, but at least hidden by default (as currently happens with comments at karma −5 or less). But I agree that this should not apply to not-completely-insane comments posted by newbies in good faith. Such comments should merely be sorted to the bottom of the page. What should be removed is violations of community norms, and “spamming” (i.e. trying to win a debate by sheer quantity of comments that don’t bring new points, merely inflating the visibility of already expressed ones).
At this moment I am imagining some kind of hybrid system, where upvotes (either private or public, no clear opinion on this yet) would be given freely, but downvotes could only be given for specific reasons (they would be equivalent to flagging) and in case of abuse the user could lose the ability to downvote (i.e. the downvotes would be either public, or at least visible to moderators).
And here is a quick-fix idea: as the first step, make downvotes public for moderators. That would at least allow them to quickly detect and remove Eugine’s sockpuppets. -- For example, a moderator could have a new button below each comment, which would display the list of downvoters (with hyperlinks to their user pages). Also, make a script that reverts all votes given by a user, and make it easily accessible from the “banned users” admin page (i.e. it can only be applied to already banned users). To help other moderators spot possible abuse, the name of the moderator who started the script for a user could be displayed on the same admin page. (For extra precaution, the “revert all votes” button could be made inaccessible to the moderator who banned the user, so that at least two moderators must participate in a vote purge.)
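A rough sketch of how that two-moderator safeguard could look; the database accessors here are hypothetical placeholders, not anything from the actual codebase:

```python
def purge_votes_of_banned_user(db, banned_user, acting_mod):
    """Revert all votes cast by an already-banned user.

    Refuses to run unless the user is banned, and unless the acting
    moderator differs from the one who issued the ban, so at least two
    moderators must participate in a vote purge.
    """
    ban = db.get_ban_record(banned_user)  # hypothetical accessor
    if ban is None:
        raise PermissionError("votes can only be purged for banned users")
    if ban.banned_by == acting_mod:
        raise PermissionError("a second moderator must start the purge")

    for vote in db.votes_by(banned_user):  # hypothetical accessor
        db.revert_vote(vote)  # undo the karma change on the target

    # Record who ran the purge so it shows up on the admin page.
    db.log_admin_action("vote_purge", target=banned_user, by=acting_mod)
```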
It’s not actually obvious to me that downvotes are even especially useful. I understand what purpose they’re supposed to serve, but I’m not sure they actually serve it.
It seems like if we removed them, a major tool available to trolls is just gone.
I think downvoting is also fairly punishing for newcomers—I’ve heard a few people mention they avoided Less Wrong due to worry about downvoting.
Good vs bad posts could be discerned just by looking at total likes, the way it is on Facebook. Actual spam could just be reported rather than downvoted, which triggers mod attention but has no visible effect.
Alternatively, go with the Hacker News model of only enabling downvotes after you’ve accumulated a large amount of karma (enough to put you in, say, the top 0.5% of users). I think this gets most of the advantages of downvotes without the issues.
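Something like this, assuming we can cheaply list every user’s karma (the 0.5% cutoff is just the figure suggested above):

```python
def downvote_threshold(all_karma, top_fraction=0.005):
    """Karma cutoff that puts a user inside the top fraction."""
    ranked = sorted(all_karma, reverse=True)
    # Index of the last user still inside the top fraction; falls back
    # to the single top user on a very small site.
    cutoff = max(0, int(len(ranked) * top_fraction) - 1)
    return ranked[cutoff]

def can_downvote(user_karma, all_karma):
    """HN-style rule: downvoting unlocks only for the highest-karma users."""
    return user_karma >= downvote_threshold(all_karma)
```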
I agree. In addition to the numerous good ideas suggested in this tree, we could also try the short-term solution of turning off all downvoting for the next 3 months. This might well increase the population.
(Or similar variants like turning off ‘comment score below threshold’ hiding, etc)
Good vs bad posts could be discerned just by looking at total likes, the way it is on Facebook.
Preferably also sorted by the number of total likes. Otherwise the only difference between a comment with 1 upvote and 15 upvotes is a single character on screen that requires some attention to even notice.
Actual spam could just be reported rather than downvoted
There are some kinds of behavior which, in my opinion, should be actively discouraged besides spam: stubborn stupidity, or verbal aggression towards other debaters. It would be nice to have a mechanism to do something about them, preferably without getting moderators involved. But maybe those could also be flagged, and maybe moderators should have a way to attach a warning to a comment without removing it completely. (I imagine a red text saying “this comment is unnecessarily rude”, which would also effectively halve the number of likes for the purpose of comment sorting.)
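For concreteness, the sorting rule I imagine (the halving factor is an arbitrary choice):

```python
def sort_score(likes, has_rude_warning):
    """Effective score used only for ordering comments on the page.

    A moderator-attached "unnecessarily rude" warning halves the likes
    for sorting purposes without hiding or deleting the comment.
    """
    return likes / 2 if has_rude_warning else likes

# Example: a rude comment with 10 likes sorts below a polite one with 6.
comments = [("polite comment", 6, False), ("rude comment", 10, True)]
comments.sort(key=lambda c: sort_score(c[1], c[2]), reverse=True)
```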
I think that upvotes/downvotes being private has important psychological effects. If you can get a sense of who your “fans” vs “enemies” are, you will inevitably try to play to your “fans” and develop dislike for your “enemies.” I think this is the primary thing that makes social media bad.
My current cutoff for what counts as a “social media” site (I have resolved to never use social media again) is “is there a like mechanic where I can see who liked me?” If votes on LW were public, by that rule, I’d have to quit.
Could you elaborate on what you mean by this? “Posting different kinds of articles on LW and writing more of the kind of stuff that gets upvoted” also sounds like “playing to your fans” to me—in both cases you’re responding to feedback and (rationally) tailoring your content towards your preferred target audience, even though in the LW case, you aren’t entirely sure of who your target audience consists of.
My current cutoff for what counts as a “social media” site (I have resolved to never use social media again) is “is there a like mechanic where I can see who liked me?” If votes on LW were public, by that rule, I’d have to quit.
Do you mean that the group dynamic itself changes for the worse if likes are visible to those who want to see them, so that it doesn’t matter if there is a setting that makes the likes invisible to you in particular? It’s a tradeoff, some things may get worse, others may get better. I don’t have a clear sense of this tradeoff.
Imagine that you’re a new person who’s a little shy about the forum, but has read a large part of the Sequences and really thinks that Eliezer is awesome, and then you make your first post and see that Eliezer himself has downvoted you.
The psychological impact of that downvote would likely be a lot bigger than the impact a single downvote should have.
OTOH, making upvotes public would probably be a good change: seeing a list of people who upvoted you feels a lot more motivating to me than just getting an anonymous number.
the tech support doesn’t give a fuck, and will cite privacy concerns when you ask them for more direct access to the database
Seriously, who are these tech support people? Clearly this database belongs to the owner of Less Wrong (whoever that is). As far as I can tell, when moderators ask for data, they ask on behalf of the owners of that data. What is going on here? Has tech support gone rogue? Why do they then get their contract renewed? Are they taking orders from some secret deep owners of LW who outrank the moderators?
Seriously, who are these tech support people? Clearly this database belongs to the owner of Less Wrong (whoever that is). As far as I can tell, when moderators ask for data, they ask on behalf of the owners of that data. What is going on here? Has tech support gone rogue? …Why do they then get their contract renewed?
The tech support is Trike Apps, who have freely donated a huge amount of programmer time toward building and maintaining LessWrong.
Yeah, it’s a bit of a “don’t look a gift horse in the mouth” situation. When someone donates a lot of time and money to you, and suddenly becomes evasive or stubborn about some issue that is critical to solve properly… what are you going to do? It’s not like you can threaten to fire them, right?
In hindsight, I made a few big mistakes there. I didn’t call Eliezer to have an open debate about what exactly is and isn’t within my competence; that is, in case of differing opinions about what should be done, who really has the last word. Instead I gave up too soon: when one of my ideas was rejected I tried to find an alternative solution, only to have it rejected again… or I would finally succeed at something, and then see that Eugine had improved his game, and I was facing another round of negotiation… until I gradually developed a huge “ugh field” around the whole topic… and wasted a lot of time… and then other people took over the role and had to start from the beginning again.
If we built it, would they come? You make a strong case that the workforce wasn’t made able to do the job; if that were fixed, would the workforce show up?