By applying our methodology to four large online news communities for which we have complete article commenting and comment voting data (about 140 million votes on 42 million comments), we discover that community feedback does not appear to drive the behavior of users in a direction that is beneficial to the community, as predicted by the operant conditioning framework. Instead, we find that community feedback is likely to perpetuate undesired behavior. In particular, punished authors actually write worse in subsequent posts, while rewarded authors do not improve significantly.
In a footnote, they discuss what they meant by “write worse”:
One important subtlety here is that the observed quality of a post (i.e., the proportion of up-votes) is not entirely a direct consequence of the actual textual quality of the post, but is also affected by community bias effects. We account for this through experiments specifically designed to disentangle these two factors.
They measure post quality from textual evidence alone by running a Mechanical Turk study on 171 comments and using that data to train a binomial regression model. So cool!
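As a rough sketch of that setup (the features, vote counts, and fitting procedure below are all made up for illustration; the paper's actual data and model details aren't reproduced here), a binomial regression of up-vote proportion on text-derived features can be fit with plain iteratively reweighted least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the crowdsourced data: a few
# text-derived features per comment, plus its vote counts.
n = 171
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # intercept + features
up = rng.integers(1, 20, size=n).astype(float)   # up-votes per comment
total = up + rng.integers(1, 20, size=n)         # total votes per comment

def fit_binomial(X, up, total, iters=25):
    """Fit a binomial (logistic-link) regression of up-vote
    proportion on features via iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))      # predicted up-vote proportion
        W = total * p * (1 - p)                  # binomial variance weights
        z = X @ beta + (up - total * p) / np.maximum(W, 1e-9)  # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

beta = fit_binomial(X, up, total)

# Predicted up-vote proportion for a new comment's (hypothetical) features.
pred = 1.0 / (1.0 + np.exp(-np.array([1.0, 0.2, -0.1, 0.3]) @ beta))
```

The point of the binomial (rather than plain linear) model is that a comment's observed up-vote fraction comes with a vote count attached, so heavily voted comments carry more weight in the fit.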
When comparing the fraction of upvotes received by a user with the fraction of upvotes given by a user, we find a strong linear correlation. This suggests that user behavior is largely “tit-for-tat”.… However, we also note an interesting deviation from the general trend. In particular, very negatively evaluated people actually respond in a positive direction: the proportion of up-votes they give is higher than the proportion of up-votes they receive. On the other hand, users receiving many up-votes appear to be more “critical”, as they evaluate others more negatively.
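The received-vs-given comparison in that passage is just a per-user correlation; a toy version (entirely synthetic data, not the paper's, with "tit-for-tat" behavior baked in as an assumption) looks like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-user vote fractions: each user receives some fraction
# of up-votes, and gives back roughly the same fraction plus noise
# (the "tit-for-tat" assumption).
n_users = 1000
received = rng.uniform(0.1, 0.9, size=n_users)
given = np.clip(received + rng.normal(0, 0.05, size=n_users), 0, 1)

# A strong linear correlation between the two is the signal the
# quoted passage describes.
r = np.corrcoef(received, given)[0, 1]
```

The deviations the authors note (very negatively rated users giving more up-votes than they receive) would show up here as systematic residuals at the extremes, not just noise.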
Incredibly interesting article. Must read.
EDIT: Consider myself updated. Therefore, I believe downvotes must be destroyed.
The main function of downvotes in LW is NOT to re-educate the offender. Its main function is to make the content which has been sufficiently downvoted effectively invisible.
If you eliminate the downvotes, what will replace them to prune the bad content?
Well, if this is really the goal, then maybe disentangle downvotes from both post/comment karma and personal karma while leaving the invisibility rules in place? Make it more of a “mark as non-constructive” button that if enough people hit it, the post becomes invisible. If we want to make it more comprehensive, it could be made to weigh these votes against upvotes to make the show/hide decision.
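A sketch of that show/hide rule (the threshold and weighting values are made up for illustration; nothing here touches karma):

```python
def is_visible(upvotes: int, nonconstructive_flags: int,
               flag_threshold: int = 5, weight: float = 2.0) -> bool:
    """Hypothetical visibility rule: hide a post once enough users mark
    it non-constructive, but let up-votes offset the flags. Karma is
    unaffected either way."""
    # Below the threshold, flags alone never hide the post.
    if nonconstructive_flags < flag_threshold:
        return True
    # Weigh flags against up-votes for the show/hide decision.
    return upvotes >= weight * nonconstructive_flags

# With these example parameters, a post with 12 flags stays visible
# only if it has at least 24 up-votes.
assert is_visible(30, 12)
assert not is_visible(10, 12)
```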
I am aware of the concept. What exactly do you mean?
It says “This paper investigates how ratings on a piece of content affect its author’s future behavior.” I don’t think LW should be in the business of re-educating its users to become good ’net citizens. I’m more interested in effective filtering of trolling, stupidity, aggression, drama, dick waving, drive-by character assassination, etc. etc.
It’s not like the observation that downvoting a troll does not magically convert him into a hobbit is news.
Could be done, though it makes karma even more irrelevant to anything.
Negative externalities.
Something else? The above study is sufficient evidence for me (and hopefully others) to start finding another solution.