I’d like to attest that I find the karma system (by which I understand not just the software but the way the community uses it) a huge blessing and part of LW’s appeal to me. It is a strong incentive to pause and ask myself if I even have something to say before I open my mouth around here (which is why I haven’t written a main blog post yet) rather than just fling crap at the wall like one does in the rest of the Internet.
The “downvotes vs. replies” problem is, I think, for the most part a non-issue. Anyone who’s been here a while knows that if (generic) you ask for clarification of your downvotes, people will generally provide it, as long as you’re not acting whiny or sore about it. And nothing stops you from constructively engaging them on the points raised (though take care to actually read what they say carefully, because people don’t like it when you fail to update).
> I’d like to attest that I find the karma system (by which I understand not just the software but the way the community uses it) a huge blessing and part of LW’s appeal to me. It is a strong incentive to pause and ask myself if I even have something to say before I open my mouth around here...
Yes, I also see that a reputation system can have positive effects under certain circumstances. But would you want such a system employed on a global basis, where millions could downvote you for saying that there is no God? Obviously such a system would be really bad for the kind of people who read Less Wrong, and for the world as a whole.
That means the use of the system on Less Wrong rests on the assumption that it will only ever be used by people who are much like you, and will therefore work well for you. But given that Less Wrong is an open system, will it always stay that way? At what point will it fail you? How will you notice? Where do you set the threshold?
And given that the system works so well at keeping everyone who doesn’t think like you off Less Wrong, how are you going to notice the negative effects of groupthink? Do we trust our ability to seek truth enough to notice when the system starts to discourage people who are actually less wrong than Less Wrong?
> That means the use of the system on Less Wrong rests on the assumption that it will only ever be used by people who are much like you, and will therefore work well for you. But given that Less Wrong is an open system, will it always stay that way?
Well, nothing lasts forever, supposedly. If, in the future, Less Wrong’s quality gets diluted away, it won’t matter to me whether it keeps the vote system or replaces it with something else, because I won’t care to be on Less Wrong any more.
However, part of the function of the vote system is selection. To put it brutally, it drives away incompatible people (and signals positively to compatible ones). So I think LW will stay worthwhile for quite a while.
And yes, in a way this is one of the negatives from your other post that I actually consider a positive. If someone gets consistently downvoted, doesn’t get why, AND can’t ask, find out, and update on that, then with some probability we can say we don’t want them here. I’m sure we lose some good people this way too, but the system is better than nothing; at least what gets through the filter is much better than what we’d have without it.