A few points:
I would hate to see LW close, and I don’t think that would be a helpful step in getting people exposed to rationality unless a new central hub rose to take its place. I found LW through HPMOR just this year and have very little idea of what LW looked like in its supposed glory days. Things aren’t great now, but if LW had been completely dead I likely wouldn’t have moved from wanting to be rational to reading 600+ pages of Rationality: From AI to Zombies, making tons of connections and rationalist friends, attending CFAR, starting a LW meetup in my area, and more. A completely dead website would have given the impression of a dead philosophy that was abandoned by the people who followed it because it wasn’t actually that useful after all.
Lowering the level of polish, rigor, and rationality knowledge publicly deemed necessary before posting in the various areas could be helpful (on current LW or a LW 2.0). I mainly post in Open and Stupid Questions threads because of this.
People here can be pretty cold and harsh in their replies. I’ve also heard of issues regarding downvote brigades or mass downvoting of people’s posts due to personal disagreements. If this place really is full of “unquiet spirits” then a method of removing them, discouraging that kind of conduct, or changing them into kind benevolent spirits should be included in the works.
I suggest ignoring karma.
I think that “be bold” and “ignore karma” cash out very differently, and while I mostly agree with “be bold” I mostly disagree with “ignore karma.”
Karma is a good mechanism for directing attention and for providing quick, anonymous feedback; if we required everyone to write a comment publicly lauding or shaming posts and comments instead of voting, we would get much less in the way of feedback because it requires much more in the way of attention and risk. If someone is consistently getting downvotes, they are most likely consistently doing something wrong.
I do think that an important part of making LW more useful is making the karma signal better; the votes are only as good as the people casting them.
Karma only gives you one bit of feedback per person voting. A [+] or [-], that’s it. We can probably do better. Even so, it’s much better than nothing.
I don’t have time to read every single comment when there are hundreds to sift through, but I can read the important ones. The only way to find the important ones without reading everything is through karma. For example, SSC posts can get a comparable number of comments, but I’ve given up reading them.
Adding even a few more bits per person to the signal could improve quality a lot. On the other hand, simplicity is one of the karma system’s strong points. The low effort required encourages participation, as you pointed out. I don’t want to complicate the system too much, but I don’t think the current version is optimal.
We could take an approach similar to Google’s PageRank, so votes by high-karma people carry more weight. This wouldn’t require any more effort for participation than it does now. We could perhaps keep the current one-bit system for determining karma score in the first place, but we would be able to sort posts/comments by the weighted score.
I’m not sure how hard this would be to implement. The database must have enough information to do this, since it tracks who made each vote. I’m also not sure how to set up the weighting function, but this sounds like a job for Bayesian methods—some of us are good at that, right? :)
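To make the weighting idea concrete, here’s a toy sketch (the function names and the log-of-karma weighting are my own invention, not an actual proposal for the weighting function): each vote still counts as one bit for the plain karma score, but for sorting, a vote is scaled by the voter’s own karma.

```python
import math

def vote_weight(voter_karma):
    """Hypothetical weighting: a voter's influence grows slowly
    (logarithmically) with their own karma, floored at 1 so that
    zero-karma accounts still count for something."""
    return 1.0 + math.log10(max(voter_karma, 1))

def weighted_score(votes):
    """votes: list of (direction, voter_karma) pairs, where
    direction is +1 for an upvote and -1 for a downvote."""
    return sum(d * vote_weight(k) for d, k in votes)

# A downvote from one established 10000-karma user outweighs
# upvotes from two fresh zero-karma accounts:
score = weighted_score([(-1, 10000), (+1, 0), (+1, 0)])
```

The plain vote count for that example would be +1, but the weighted score comes out negative, which is the PageRank-flavored behavior we want when sorting.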
Getting downvoted can be discouraging. People who get downvoted enough (or fear getting downvoted enough) may not participate. Sometimes this is a good thing (e.g. trolls). But in other cases, there could be people with important things to say, who could improve their quality with just a little guidance.
For anyone reading this, what are your usual reasons for downvoting?
Perhaps they fall into some common patterns we could enumerate. (Perhaps a fallacy or cognitive bias from the sequences?) If so, we could add flags for these common reasons to the comment system. Marking a flag would count as your downvote, but would provide much more valuable feedback to the commenter, and also to other newcomers. We could control these specific problems without discouraging participation as much as a simple “[-]. You’re wrong. About something.”, like we do now.
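As a sketch of what flag-based downvoting might look like (the flag names here are made up for illustration, not a proposed taxonomy): each flag counts as exactly one downvote, but the reason is recorded and can be shown to the commenter.

```python
from collections import Counter

# Hypothetical downvote reasons, e.g. drawn from the Sequences:
FLAGS = {"unclear", "factually_wrong", "logically_fallacious",
         "uncharitable", "off_topic"}

def record_flag(tallies, comment_id, flag):
    """Record one flag on a comment. Each flag counts as a single
    downvote but preserves the reason for later display."""
    if flag not in FLAGS:
        raise ValueError(f"unknown flag: {flag}")
    tallies.setdefault(comment_id, Counter())[flag] += 1

def downvote_count(tallies, comment_id):
    """Total downvotes on a comment = total flags of any kind."""
    return sum(tallies.get(comment_id, Counter()).values())

tallies = {}
record_flag(tallies, 42, "unclear")
record_flag(tallies, 42, "unclear")
record_flag(tallies, 42, "off_topic")
```

The commenter on comment 42 would then see not just “−3” but “unclear ×2, off_topic ×1”, which is actionable in a way a bare score isn’t.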
Listen, you have people like Eugene with their army of upvoting/downvoting sockpuppets, etc. Karma might carry some signal (related to “what the community likes,” which is neither here nor there) if it were policed, but it isn’t.
But is there an etc.? Sure, we have one not huge sockpuppet army which pops up once in a while, but besides that I don’t see karma as much abused here.
Of course the real question is whether it signifies anything except mob likes. In many areas allowing the likes to lead you is a really bad idea.
It’s like cockroaches, for every one you see, how many do you not see?
Trust me, if you have a cockroach-infested kitchen, you know it, even if you don’t see that many roaches scurrying about :-/
Agreed.
It isn’t, yet.
How would you go about policing it?
One simple approach is leaderboards similar to “Top Contributors, 30 Days”: something like “Top Upvoters, 30 Days” and “Top Downvoters, 30 Days.” This would give mods a sense of who’s driving karma shifts (and would surface potential problems, like VoiceOfRa mass-downvoting people, as they’re happening).
It seems likely these should only be available to the police (i.e. the mods); you don’t want people voting just to push their score higher on the board.
A more direct approach works with the vote graph itself. Serial upvote and downvote detection, as done by Stack Overflow, relies on the graph, but sockpuppets are probably more noticeable as voting cliques or odd vote distributions. It would be interesting to take other people’s vote relationships with me (i.e. one person may have upvoted 100 of my posts and comments and downvoted 10 of them), figure out what sort of distribution that takes on, and then look for users with anomalous distributions (of their votes on others, or of others’ votes on them). This is similar in spirit to Benford’s-Law-style anomaly detection: if someone has received a disproportionate number of votes from a small number of users, or given a disproportionate number to a small number of users, then some sort of sockpuppetry is likely going on.
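A toy version of the distribution check (the 50%-of-at-least-20-votes threshold is invented purely for illustration): flag accounts where a single voter accounts for an outsized share of the votes a user has received.

```python
from collections import Counter

def concentration(received_votes):
    """received_votes: list of voter ids, one entry per vote a user
    received. Returns the fraction contributed by the single most
    prolific voter -- high values suggest a clique or sockpuppet."""
    if not received_votes:
        return 0.0
    counts = Counter(received_votes)
    return counts.most_common(1)[0][1] / len(received_votes)

def looks_suspicious(received_votes, threshold=0.5, min_votes=20):
    """Invented rule of thumb: flag if one account cast more than
    half of at least 20 received votes."""
    return (len(received_votes) >= min_votes
            and concentration(received_votes) > threshold)

# One "sock" account cast 15 of 20 votes -> flagged for review.
suspicious = looks_suspicious(["sock"] * 15
                              + [f"user{i}" for i in range(5)])
# Three votes from three different users -> too few to judge.
clean = looks_suspicious(["a", "b", "c"])
```

A real implementation would want to compare against the empirical distribution across all users rather than a fixed cutoff, but even this crude check would have caught the known mass-voting cases.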
If you can run queries on the backend database, it’s rather easy to discover voting shenanigans. The problem, as I understand it, is that right now mods have to ask Trike specific questions and Trike isn’t speedy about getting back to them.
That said, if we can define the characteristics of some standard queries we would like exposed (for example, “Top Upvoters, 30 Days” and “Top Downvoters, 30 Days,” as Vaniver mentioned), Trike might be willing to expose those queries to LW admins.
Or they might not. The way to find out is to ask, but we should only bother asking if we actually want them to do so. So discussing it internally in advance of testing those limits seems sensible.