The problem with downvotes is that the people being downvoted rarely know that they are wrong. If they did know, they would have deliberately submitted something they expected to be downvoted, in which case the downvotes would be anticipated and have little or no effect on their future behavior.
In some cases downvotes might cause a person to reflect on what they have written. But that will only happen if the person believes the downvotes are evidence that their submission is actually faulty, rather than a signal that the downvoters acted for various reasons other than being right.
Even if all the requirements for a successful downvote are met, the person may well be unable to figure out exactly how they are wrong from a change in a number attached to their submission. The information is simply not sufficient, which will cause the person either to keep expressing their opinion or to avoid further discussion while continuing to hold wrong beliefs.
With respect to the reputation system employed on Less Wrong, it is often argued that a little information is better than no information. Yet humans can easily be overwhelmed by too much information, especially when that information is easily misjudged and provides little feedback. Such information may only add to the overall noise.
And even if the above-mentioned problems did not exist, reputation systems can easily reinforce groupthink, if only by discouraging those who disagree and rewarding those who agree.
If everyone were perfectly rational, a reputation system would be a valuable tool. But Less Wrong is open to everyone. Even if most voting behavior is currently free of bias and motivated cognition, it might not stay that way for long.
Take, for example, the voting pattern for plain-English, easily digestible submissions versus highly technical posts involving math. Much of the latter category receives far fewer upvotes. This inevitable effect of a reputation system actively discourages the writing of technical posts.
Worst of all, any reputation system protects itself by making those who most benefit from it defend its value.
Well, there are two different aspects to the Less Wrong system: a person's global karma, and the score of an individual comment.
I agree that a person's global karma is of mixed usefulness. It does sometimes give me a little kick to be more careful when writing on LW (and I'm probably not the only one), but only slightly, and it does have significant drawbacks.
But the score of an individual comment serves a different purpose: it lets a third party (neither the one who posted the comment nor the one who voted on it) easily distinguish comments worth reading from those that are not. In that regard it works relatively well; not perfectly, but better than nothing. For that purpose it doesn't really matter whether the OP understands why he is downvoted, and explaining why you downvote does more harm than good: it decreases the signal-to-noise ratio (unless the explanation itself is very interesting, e.g. it points to a fallacy that is not commonly recognized).
Less encouraged.