Has this topic been discussed in detail?
Personally, the reputation system mainly taught me how to play the game, but not why, beyond maximizing my karma score. It works like a dog collar, administering electric shocks when the dog approaches a certain barrier. The dog learns where it may go through pain.
Humans can often infer only little detail from the change of a number, and the little they do learn is mostly misinterpreted. People complaining about downvotes are a clear indication that this is the case.
If people write, “I know I’ll be downvoted for this, but...”, what they mean is that they have learned that what they are about to write will be punished, but not why, and that they are more than superficially interested in learning how they are wrong.
Has it been shown that reputation systems cultivate discourse and teach novel insights, rather than turning communities into echo chambers and their members into karma-score maximizers?
If that were my sole intention, I could probably accumulate a lot of karma. It is only because I often ignore what I have learned about the reputation system, and write what interests me, that I manage to put forth some skepticism. But can a community that is interested in truth and the refinement of rationality rely on people to ignore the social pressure and strong incentives a reputation system applies, in favor of honesty and diversity?
How much of what is written on Less Wrong, and how it is written, is an effect of the reputation system? How much is left unsaid?
I do not doubt that reputation systems can work in principle. If everyone involved were perfectly rational, with a clear goal in mind, a reputation system could provide valuable feedback. But once you introduce human nature, it might become infeasible in practice, or have adverse side effects.
Perhaps we should have a social norm of asking anyone who says “I know I’ll be downvoted for this” why they think so.
I am going to stick with downvoting them regardless.
What’s so bad about writing that you know that you’ll be downvoted? Many of your comments on the recent meta-ethics threads have been downvoted (at least initially; I haven’t checked again). So you know that another comment criticizing someone else’s moral theory is likely to be downvoted as well (I think you even wrote something along those lines).
Saying that you are aware that what you are going to say will be downvoted provides valuable feedback.
That you know you are going to be downvoted doesn’t mean you know you are wrong and have decided to voice your wrongness again.
Mild spamminess, an unhealthy passive-aggressive habit, an unnecessary insult to the reader.
I find “I know I’ll be downvoted” or “I know I’ll be flamed” to be tiresome, even though I don’t downvote them.
I’d rather be left to form my own opinion relatively freshly.
Also (and I’m not saying this applied to wedrifid), I frequently find that IKIB* is attached to something that is either innocuous or that ends up being liked.
I will also continue to downvote them, but I’m more likely to explain why.
And, of course, being downvoted doesn’t necessarily *mean* that you’re wrong.
I hadn’t thought about that policy, and I wouldn’t presume to ask you to change it.
Why thank you. I’ve also made an exception to my general policy of downvoting all ‘should’ claims for norms that don’t have my complete support. :)