The next wave of people finds the downvoting excessive and upvote in response.
I think such people may be more harmful to the voting system than the usual vote manipulation.
Your vote should express whether you want to see more of something or less of something on LessWrong. It should not be used strategically to counter other people’s votes. Otherwise you not only fail to contribute to the system yourself, you also cancel out other people’s contributions. What exactly are you aiming for? A webpage where no one will bother to downvote annoying content, because they know someone else will immediately upvote it back?
You should upvote only those comments you would upvote regardless of their current score.
I disagree; that is, I think it is reasonable to upvote or downvote “strategically.” I agree with the proposed motive (how much of this kind of content do you want to see), but e.g. if I see a comment which I think is not particularly bad, but also not particularly good, so that I don’t care to increase or decrease the amount of it on Less Wrong, then I will upvote that comment if I see it downvoted, and might very well downvote it if I see it upvoted.
If I see a comment downvoted to −2 or −3, and I would like to see less of it on Less Wrong, that does not necessarily mean I should downvote it again, since this could result in not seeing such comments at all, which is not necessarily what I want. I want there to be less content like that, but not none at all.
In other words, I agree with your proposed goal, but I think strategic voting is a reasonable means of attaining that goal.
if I see a comment which I think is not particularly bad, but also not particularly good, so that I don’t care to increase or decrease the amount of it on Less Wrong, then I will upvote that comment if I see it downvoted, and might very well downvote it if I see it upvoted.
I may be misunderstanding what you wrote, but it seems to me you just said that if you have no genuine preference for having more or less of some kind of content, your second preference is to negate the expressed preferences of other LW readers.
If too many have voted to see less of X, you vote for more X, not because you literally want “more X”, but because you want “more of what many other people don’t want”. And if too many have voted to see more of X, you vote for less X, again not because you literally want “less X”, but because you want “less of what many other people want”.
So, essentially, your preference is that other people get less of what they want, and more of what they don’t want?
I do the same thing, but the preference for me is really “The vote score should be in proportion to how much I think the post adds to the discussion.” If it’s at −10, but I think it adds a little to the discussion (or only takes away a little), I’ll upvote, because the score is out of proportion with the value it provides or takes away. If a comment is at +100 but only adds a little to the discussion, I’ll downvote.
A consequence of this is that the total score of a comment depends on the order of voting.
For example, if your algorithm is “upvote below 5, downvote above 5”, and ten other people want to upvote unconditionally, then the final score may be 11 or 9 depending on whether you voted first or last.
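A toy simulation makes the order dependence concrete. This is a hypothetical sketch, not LW’s actual vote mechanics: the threshold-5 rule and the ten unconditional upvoters are just the example above.

```python
def strategic_vote(current_score, target=5):
    """Vote toward a target score: +1 below it, -1 above it, abstain at it.

    (The target of 5 is the hypothetical rule from the example, not a real
    LW convention.)
    """
    if current_score < target:
        return 1
    if current_score > target:
        return -1
    return 0


def tally(voters):
    """Apply each voter in order; a voter maps the current score to -1/0/+1."""
    score = 0
    for voter in voters:
        score += voter(score)
    return score


unconditional_upvoters = [lambda s: 1] * 10

# Strategic voter goes first: sees 0, upvotes, then ten upvotes follow -> 11.
first = tally([strategic_vote] + unconditional_upvoters)

# Strategic voter goes last: sees 10, downvotes -> 9.
last = tally(unconditional_upvoters + [strategic_vote])

print(first, last)  # 11 9
```

Running `tally` with only the unconditional voters gives 10 regardless of order, which is the sense in which unconditional voting is order-independent and strategic voting is not.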
A consequence of voting unconditionally is that you’ll contribute to comments being scored higher than you think they deserve. All scoring rules have tradeoffs.
I think I disagree with the idea that a comment deserves a specific number of votes.
Comment karma is “the number of people who liked it, and cared enough to click the button, minus the number of people who disliked it, and cared enough to click the button”.
What does it mean to say that a comment deserves a final score of, e.g., five? Downvoting a comment strategically is like saying “this is a nice comment, but it doesn’t deserve to have more than five people like it; and because six people said they like it, I am saying that I dislike it, just so that it gets the result it deserves”.
It might be worth a poll to find out whether people think posts “deserve” a certain number (or a number in a small range) of net votes.
I’m not sure that sort of voting makes sense, but I do a little of it myself. I’m guessing that “justice” based voting stabilizes the value of karma, and otherwise it would take increasingly high numbers of votes to indicate that a post is unusually good.
What does it mean to say that a comment deserves a final score of, e.g., five?
It actually doesn’t mean anything if there’s only one comment. But the way LW works is that if there’s one comment with 5, and another with 6, the one with 6 gets displayed first and read by more people.
I think your scoring rule makes more sense in a binary “vote yes or no” democracy. If you’re trying to decide whether or not to enact a policy, and the policy is enacted when there are more positive than negative votes, then you should vote yes if you agree with the policy and no if you disagree.
But in a meritocratic system like LW, where individual posts are ranked against each other by score, this results in “pretty good comments” being ranked at the same level as “really good comments”.
You can change your vote later if necessary, and sometimes I do, either to no vote at all, or to the opposite vote.
It is not a question of opposing other people’s preferences. It is a question of taking the actions that will most likely result in the situation closest to the one I want. For example, in the first case, I meant that I do not want the amount of that content either increased or decreased. I do not mean that I do not care. I mean I like things the way they are. If the comment is at −1, I will likely start to see less of it. Since I do not want it increased or decreased, I upvote it.
That certainly does not mean that I want to increase anything just because other people want less of it, or decrease anything because they want more of it.
It is not a question of opposing other people’s preferences. It is a question of taking the actions that will most likely result in the situation closest to the one I want.
But the mechanism by which you do so is opposing other people’s preferences. That is, if there’s a comment that I want to be at net 0, then upvoting it if it’s at −1 or downvoting it if it’s at +1 accomplishes that goal, but which one I do depends on what the community consensus was at the time of voting.
In general, I think voting based on current karma decreases the info content of voting and harms more than it helps. Vote on your desire to see or not see a comment, not your desire for the community to want to see or not want to see the comment!
I don’t think your second paragraph follows from your first.
I agree that establishing the general claim that voting based on current karma harms more than it helps requires more than the first paragraph, and is just a statement of a conclusion rather than an argument leading to that conclusion.
But I think the rest of the second paragraph is related to the first—the reason why it decreases the info content of voting is because the votes are clashing (your vote on a comment is now negatively correlated with my vote, making your vote less influential).
I also don’t think the first claim makes much sense. First of all, it’s not always anti-correlated. It’s only anti-correlated if you vote unconditionally and the score is far below or far above the value we both think the post provides. If the score is positive, but not positive enough, the vote is correlated. If it’s negative, but not negative enough, the vote is correlated.
Secondly, you’re assuming everyone uses the same scoring rule you do. We’ve already established that at least two people use a different scoring rule, and as another commenter pointed out, it’s likely that there are many people who vote strategically. In that case, if we think the post has the same value, we’d do the same thing in the same situation, and if we assign it different values, we wouldn’t—which is how it should be.
Your vote should express whether you want to see more of something or less of something on LessWrong.
That’s one possible interpretation of voting on LW. It is not the only one possible. Do you think one can apply terms like “correct” or “wrong” to these interpretations?
Some people think in terms of people behind the comments and not comments themselves. They think that downvotes cause sadness for a person who was downvoted and they use their upvote as a consolation, as an attempt to cheer a downvoted person up.
“Strategic” voting is pretty much unavoidable, since voting has some cost (however mild). It makes sense to vote when you think it will make a useful contribution, by expressing a different POV than other LessWrong contributors would. Does this make scores less representative? It’s not clear that it does—how many people would care if some unambiguously good comment is at, say, +17 as opposed to +19 because some users just didn’t bother to vote it up?