Look at a random LW thread, or perhaps this one. Comments with positive karma are common; comments with negative karma are rare. (Someone could make a script to look at the latest N articles and determine the exact ratio, but I'm too lazy.)
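(For the curious, a minimal sketch of what such a script could look like. The `fetch_latest_posts` helper is a hypothetical placeholder for whatever API or scraper you would actually use; it is assumed to return posts with their comments' karma scores.)

```python
# Sketch: count positively vs. negatively scored comments in the latest N posts.
# `fetch_latest_posts` is hypothetical; swap in a real API call or scraper.

def karma_ratio(fetch_latest_posts, n_posts=50):
    positive = negative = zero = 0
    for post in fetch_latest_posts(n_posts):
        for karma in post["comment_karma"]:  # assumed: list of ints per post
            if karma > 0:
                positive += 1
            elif karma < 0:
                negative += 1
            else:
                zero += 1
    total = positive + negative + zero or 1
    print(f"{positive}/{total} positive, {negative}/{total} negative, {zero}/{total} zero")
    return positive, negative, zero
```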
Maybe that just means that we have a smart and civilized discussion here, so the system is working as intended—people upvote more than downvote because they are satisfied more often than dissatisfied.
The more I think about it, the more it seems to me that the problem is that the karma system was not designed to prevent this kind of abuse (downvote-bombing an enemy), so it is vulnerable here… but the proposed solutions would be vulnerable to other kinds of abuse. (What happened to holding off on proposing solutions?) Perhaps we should start by declaring the properties we want the system to have, listing a few examples of possible abuse, and only then try designing a system that has the desired properties and can resist the abuse. Maybe we don't even agree on what those properties are.
For example: Really bad content (disliked by most people) should be hidden before everyone has to read it. People who write really bad content should be prevented from writing much. On the other hand, the system should not allow one person to “destroy” their enemy, if other people have no problem with what the person writes. It shouldn’t be possible to get more power merely by creating a dozen sockpuppet accounts. Etc.
The current system is not perfect, but it seems to come close to these properties (more than many other websites). For example, even the downvote-bomber can give you only one downvote per comment, so if your average comment karma is greater than one, you will survive. And if your comments are good, then perhaps instead of the person who stupidly downvoted them, we should blame the people who liked the comments but didn't upvote. -- I see an analogy to a country where a majority of people refuse to vote and are then unhappy about the results of the election. But unlike this political analogy, you don't vote for an existing party (all of which may suck); you vote directly on the comments. So if most people who like something remain quiet, and the minority who dislikes it expresses their opinion, who exactly is to blame, and what can we do to improve the situation? I feel we shouldn't go as far as to say that even a little liking always trumps any amount of disliking (which is what removing downvotes means). I'm not sure that making 1 like equal to 2 or 5 or 10 dislikes solves the problem; it feels to me like solving the wrong problem. Maybe it's just that when most readers refuse to provide a signal, we can't magically create it from the noise.
If what we know is that user A liked a comment, and user B disliked it, should we try to statistically detect the possibility that "actually 20 users liked the comment, but 19 didn't bother voting, only A did; and the downvote was actually a result of B's personal grudge against the author, unrelated to the specific comment… and therefore this comment should be highlighted"? -- Actually, if we could detect this somehow, reliably, maybe we should. At least, it could be worth trying. I mean, if we could extract that information, then why not use it? It could be easier than trying to change human nature. But such a solution, if possible, would require math, not just a random idea. So we should approach it as a serious mathematical problem, create models, test algorithms on them, etc.
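(To make that concrete, here is a toy simulation of the kind of model one might start from: readers independently like the comment with some probability, only a small fraction of them bother to vote, and one "grudge" downvote gets added; we then ask how often the observed score ends up non-positive despite broad approval. The model and all the numbers are assumptions for illustration, not claims about actual LW voting behavior.)

```python
# Toy model: how often does a sparse vote sample (plus one grudge downvote)
# hide the fact that nearly everyone liked the comment?
# All parameters below are illustrative assumptions.
import random

def simulate(n_readers=20, p_like=0.95, p_vote=0.05, grudge_downvotes=1, trials=10_000):
    misleading = 0  # runs where the visible score is <= 0 despite ~95% approval
    for _ in range(trials):
        score = -grudge_downvotes
        for _ in range(n_readers):
            likes_it = random.random() < p_like
            bothers_to_vote = random.random() < p_vote
            if bothers_to_vote:
                score += 1 if likes_it else -1
        if score <= 0:
            misleading += 1
    return misleading / trials

if __name__ == "__main__":
    print(f"P(score <= 0 despite broad approval): {simulate():.2f}")
```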