It seems to me that your suggested policy would result in comment-placement effects being even stronger than they are now. What score should a comment end up with if 50 people consider voting on it and they all think it should have a score of +2?
I communicated poorly. I don’t think “should have a score of +2” should enter into the decision to upvote, downvote, or not vote. Instead, I’d rather have voting algorithms which, when implemented individually, produce results that can be meaningfully summed. For example, suppose everyone upvotes exactly when they think a comment is in the top 5% of comments in “everyone should read this” ordering, and downvotes for the bottom 5%. Then the sum reflects (the number of people who read the comment) × (the fraction who thought it was in the top 5% minus the fraction who thought it was in the bottom 5%). That’s something I can understand.
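A minimal sketch of that threshold rule, as I read it. The reader count, the noise model, and the specific numbers are invented for illustration; only the top-5%/bottom-5% rule comes from the comment above:

```python
import random

def threshold_vote(perceived_percentile):
    """Upvote iff the reader places the comment in the top 5% of the
    'everyone should read this' ordering; downvote iff the bottom 5%;
    otherwise abstain."""
    if perceived_percentile >= 0.95:
        return 1
    if perceived_percentile <= 0.05:
        return -1
    return 0

# Hypothetical scenario: 200 readers, each perceiving the comment's
# true percentile (0.93 here) with some Gaussian noise.
random.seed(0)
votes = [threshold_vote(random.gauss(0.93, 0.05)) for _ in range(200)]

# The sum is then interpretable as roughly:
# (number of readers) x (fraction who ranked it top-5%
#                        minus fraction who ranked it bottom-5%)
score = sum(votes)
```

Under this rule every individual vote means the same thing, so the total carries the interpretation given above, which is the "meaningfully summed" property being asked for.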
If I think a comment should end up with a score of +2, too bad, I have no direct way of controlling that. The resulting score is a reflection of the community’s votes, not something I try to game by altering my voting decision based on whether the score gets closer to +2.
I mean, do people downvote comments that they would have otherwise not voted on if they think the comment has too many upvotes? If not, why do they decline to upvote when they otherwise would have upvoted? The two look the same from everyone else’s perspective, right?
I’m not saying that your proposed algorithm is wrong—not exactly, anyway. I am pointing out something that I think is a flaw.
Putting the same point a different way:
Consider two comments. One is posted early, and is seen by 50 people. It’s slightly good—good enough that each of those people would, by your algorithm, upvote it, but no better than that. The other is posted late, and is only seen by 10 people, but it’s very, very good. According to your algorithm, the first one would get a score of +50 and the second one would get a score of +10. By the methods currently in use, the first one will get a low score—probably +1 or +2—and the second one will still get +10.
The first comment got many more points than the second, by your algorithm, because its author was able to quickly put together something good enough to be upvoteable, and because they were at the right place at the right time to post it early in the conversation, which implies either luck or lots of time spent lurking on LW. I don’t think these are things we want to incentivise—at least not more than we want to incentivise putting time into crafting well-thought-out comments.
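The arithmetic in that example can be checked mechanically. The reader counts and the everyone-who-sees-it-upvotes assumption are taken straight from the scenario above; the function name and the probability parameter are mine:

```python
def sum_score(readers, upvote_prob=1.0):
    """Expected score under the pure-sum rule: every reader who
    chooses to upvote adds +1. upvote_prob is illustrative only."""
    return readers * upvote_prob

early = sum_score(50)  # slightly good, seen by 50 people
late = sum_score(10)   # very good, seen by only 10 people

# Under the pure-sum rule, exposure dominates quality: the early
# comment scores 5x higher even though each of its readers rated
# it as merely 'barely worth an upvote'.
assert early > late
```

The point is that the sum conflates reader count with per-reader approval, which is exactly the placement effect being objected to.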
Also:
… do people downvote comments that they would have otherwise not voted on if they think the comment has too many upvotes?
I do this. Not very often, but it happens.

You’re right. Reviewing my feelings on this, I discovered that my main “ugh, that’s terrible” feeling comes from the observation that a correlated set of people forms a control system that wipes out the contributions of others not in a similar or larger implicit alliance. That doesn’t imply the solution is to vote independently of the total, though, as there are negative side effects like the one you describe.
I mean, do people downvote comments that they would have otherwise not voted on if they think the comment has too many upvotes? If not, why do they decline to upvote when they otherwise would have upvoted?
I often (although not always) will upvote a comment simply if it deserves it. I only very rarely downvote a comment if I think its score is too high but should be positive. Declining to upvote a too-high comment is something I do much more frequently than downvoting it; that is a passive rather than an active decision. In general, declining to upvote creates less negative emotional feeling in me than actively downvoting something whose score is too high.
I do sometimes upvote comments that have been downvoted if I think they’ve simply been downvoted way too much. That seems for me at least to be the most common form of corrective voting.
I have no idea how representative my behavior is of the general LWian.
If I think a comment should end up with a score of +2, too bad, I have no direct way of controlling that. The resulting score is a reflection of the community’s votes, not something I try to game by altering my voting decision based on whether the score gets closer to +2.
Ok, but that’s you self-handicapping, and I want no part of it myself.
My decision to vote shall be determined by whatever vote I predict has the best consequences.

Surely by whatever vote is recommended by the decision procedure you predict has the best consequences. ;)

No, I meant what I said.
I don’t think “should have a score of +2” should enter into the decision to upvote, downvote, or not vote.
Why not? No, really: what’s wrong with that?
Instead, I’d rather voting algorithms which, when implemented individually, have results which can be meaningfully summed.
The current voting algorithms can be meaningfully summed, they’re just complicated, opaque and nonstandardized. I don’t understand why you think “everyone should use my voting algorithm” is a useful thing to say.
If I think a comment should end up with a score of +2, too bad, I have no direct way of controlling that.
In what situation would you not, given that it is possible to alter your voting decision based on whether the score gets closer to +2? Do you intend to prevent that somehow?
do people downvote comments that they would have otherwise not voted on if they think the comment has too many upvotes?
At least two people do. Why do you ask? (Seriously, I can’t figure out why this is phrased as a rhetorical question.)
Edit: Okay, here’s the thing: I think it would be more useful if karma was the average of our valuations; i.e. if you could, say, input ‘+10’ or ‘-3’ as shorthand for ‘upvote if below this number, downvote if above’ rather than simply ‘upvote’ and ‘downvote’. What do you imagine the problem with this system would be?
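A sketch of what that proposal seems to imply, assuming each input number acts as a standing instruction ("upvote while the displayed score is below my number, downvote while it is above") and the displayed score is just the net of the standing votes. The function name and the search-for-a-stable-point dynamic are my guesses, not part of the proposal:

```python
def settled_score(targets):
    """Find a score s that is stable under the 'vote toward my number'
    rule: s should equal (#targets above s) - (#targets below s).
    If no integer satisfies that exactly, return the s that comes
    closest. 'targets' holds the numbers the voters input."""
    def net(s):
        return (sum(1 for t in targets if t > s)
                - sum(1 for t in targets if t < s))
    lo = min(targets) - len(targets)
    hi = max(targets) + len(targets)
    return min(range(lo, hi + 1), key=lambda s: abs(net(s) - s))

# 50 voters who all think the comment should sit at +2: the score
# settles at +2 regardless of how many people saw the comment.
print(settled_score([2] * 50))                   # 2

# Mixed valuations pull toward a compromise value.
print(settled_score([10, -3, 5]))                # 1

# Extreme inputs act like permanent up/down votes, so the larger
# bloc wins by its margin.
print(settled_score([1000] * 3 + [-1000] * 2))   # 1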
Not exactly a problem, but a lot of my votes would either be +1000 or −1000.