I don’t think that would work, because the reason that easy content rises faster is not that the people voting are unable to judge quality.
The upvote grading system is pass/fail... it inherently favors content that is just barely good enough to earn the upvote, and is otherwise processed as easily, quickly, and uncontroversially as possible.
Under my model of why easy content rises, Eliezer_Yudkowsky-votes would be just as susceptible to the effect as any newbie LW user’s votes... that is, unless high-profile users exerted a conscious effort to actively resist upvoting content that is good yet not substantial.
What’s worse, you could become a high-karma user simply by posting “easy content”. That’s what happens on Reddit.
On LessWrong, readers have a distaste for mindless content, so it doesn’t proliferate, but all this means is that the “passing” threshold is higher. So you might (just as an example) still end up with content that echoes things everyone already agrees with: it isn’t obviously unsubstantial in a way that would trigger downvotes, but it is still not particularly valuable while remaining easily processed and agreeable.
(Note: while pointing out the shortcomings of the voting system, I haven’t actually suggested a superior method. Short of peer review, I’m guessing a more nuanced voting system that goes beyond the binary ⇵ would be helpful.)
Maybe people should be able to give two kinds of karma. One would be for “pretty good” and the other (perhaps with a limit on how many can be given in a month) would be for “really excellent”.
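As a rough sketch of how such a monthly cap on “really excellent” votes might be enforced (Python; the cap of five and every name here are made up for illustration, not an existing LW feature):

```python
from collections import defaultdict
from datetime import datetime

EXCELLENT_CAP_PER_MONTH = 5  # hypothetical limit, purely for illustration

# (username, "YYYY-MM") -> number of "really excellent" votes already spent that month
excellent_votes_spent = defaultdict(int)

def cast_vote(user: str, kind: str, when: datetime) -> bool:
    """Accept a vote and return True, or return False if the monthly quota is exhausted.
    kind is either "pretty_good" (unlimited) or "really_excellent" (capped)."""
    if kind == "pretty_good":
        return True
    key = (user, when.strftime("%Y-%m"))
    if excellent_votes_spent[key] >= EXCELLENT_CAP_PER_MONTH:
        return False
    excellent_votes_spent[key] += 1
    return True
```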
Some link aggregators have “reaction buttons”—OMG, Epic, LOL, Fail, WTF, and stuff like that.
I think it would be cool if someone would make a forum with separate feedback for:
I personally benefited or learned from this … TIL
I denotatively think this is factually correct / incorrect … green checkmark vs. red X
connotative yay / boo … :) vs. :(
visibility vote - as in, I want this to be more / less visible to others… open eye vs. closed eye
Under this system, the visibility vote would play the role of the primary up/down vote, and the “TIL” would be a separate marker of quality, which people could sort by if they wanted. The purpose of the other two feedback forms is primarily to provide an outlet for signaling agreement or disagreement without adversely influencing visibility—you might want to make a distasteful opinion, common misconception, or well-formed argument for an opposing viewpoint visible without creating the impression that it is supported and correct. You might want to make applause or jokes less visible without signaling displeasure or disagreement.
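As a concrete illustration of the proposal above, here is a minimal sketch in Python (every class, field, and function name is hypothetical, not any existing forum’s data model). Only the visibility votes drive the default ordering; “TIL” is kept as a separate, sortable quality marker:

```python
from dataclasses import dataclass, field

@dataclass
class Feedback:
    """Separate tallies for the four feedback channels on one post."""
    til: int = 0            # "I personally benefited or learned from this"
    correct: int = 0        # green checkmark
    incorrect: int = 0      # red X
    yay: int = 0            # connotative :)
    boo: int = 0            # connotative :(
    more_visible: int = 0   # open eye
    less_visible: int = 0   # closed eye

@dataclass
class Post:
    title: str
    feedback: Feedback = field(default_factory=Feedback)

def by_visibility(posts):
    # Only the eye votes affect default ordering; yay/boo and correct/incorrect do not.
    return sorted(posts,
                  key=lambda p: p.feedback.more_visible - p.feedback.less_visible,
                  reverse=True)

def by_til(posts):
    # Readers who want the quality marker can sort by it instead.
    return sorted(posts, key=lambda p: p.feedback.til, reverse=True)
```

Keeping the channels separate means a post could collect plenty of :) reactions and still sink in the default ordering, or rise on visibility votes despite drawing :( reactions.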
Realistically though, a lot of users have talked about making deep changes to how LW’s website functions in the past, and it doesn’t seem like anyone’s up for actually doing it. It is kind of a hard job to do.
On LessWrong, readers have a distaste for mindless content, so it doesn’t proliferate, but all this means is that the “passing” threshold is higher. So you might (just as an example) still end up with content that echoes things everyone already agrees with: it isn’t obviously unsubstantial in a way that would trigger downvotes, but it is still not particularly valuable while remaining easily processed and agreeable.
At some point, shouldn’t content like the latter be identified as either applause lights or guessing the teacher’s password? And, theoretically, be documented better in the wiki than in the original posts? To me it seems like migrating excellent content to the wiki would be a good way to prevent follow-up articles unless they address a specific portion of the wiki, in which case the addition could just be edited in, with discussion. I haven’t spent any time on the wiki, though, which suggests that either I am doing it wrong or that the wiki is not yet as high-quality as the posts.
(Note: while pointing out the shortcomings of the voting system, I haven’t actually suggested a superior method. Short of peer review, I’m guessing a more nuanced voting system that goes beyond the binary ⇵ would be helpful.)
If I imagine a perfect rating oracle, it would give ratings that ended up maximizing global utility. If it only had the existing karma to work with, it would have to balance karma as an incentive to readers and an incentive to authors, so that the right posts would appear and be read by the right people to encourage further posts that increased global utility. It could do that with the existing integer karma ratings, but at the very least it seems like separate ratings for authors and content would be appropriate, to direct readers to the best posts and also give authors an incentive to write the best new posts. This suggests both separate karma awards for content and authorship, as well as karmafied tags, for lack of a better word, that direct authors in the direction of their strengths and readers in the direction of their needs. For example, a post might be karma-tagged “reader!new-rationalist 20”, “author!new-rationalist 5”, and “author!bayesian-statistics 50” for a good beginning article for aspiring rationalists written by an author who really should focus on more detailed statistics, given their skill in the subject as evidenced by the post.
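For concreteness, karma-tags like those could be represented and split into reader-facing and author-facing scores roughly like this (a hypothetical sketch; the “<audience>!<topic> <score>” syntax is simply the one used in the example above):

```python
from dataclasses import dataclass

@dataclass
class KarmaTag:
    audience: str  # "reader" (directs readers to posts they need) or "author" (feedback on strengths)
    topic: str     # e.g. "new-rationalist", "bayesian-statistics"
    score: int

def parse_karma_tag(raw: str) -> KarmaTag:
    # Assumed format, taken from the example above: "<audience>!<topic> <score>"
    head, score = raw.rsplit(" ", 1)
    audience, topic = head.split("!", 1)
    return KarmaTag(audience, topic, int(score))

tags = [parse_karma_tag(s) for s in
        ["reader!new-rationalist 20", "author!new-rationalist 5", "author!bayesian-statistics 50"]]

reader_score = sum(t.score for t in tags if t.audience == "reader")           # what readers would sort by
author_feedback = {t.topic: t.score for t in tags if t.audience == "author"}  # what directs the author
```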
At some point, shouldn’t content like the latter be identified as either applause lights or guessing the teacher’s password? And, theoretically, be documented better in the wiki than in the original posts?
Maybe, but the idea is to make the best content the most visible. If applause gains higher visibility than content, the system has failed even if the users are able to identify which is which after having seen them both.
I haven’t spent any time on the wiki, though, which suggests that either I am doing it wrong or that the wiki is not yet as high-quality as the posts.
I think people use the forums more than the wiki because the forums are more social and interactive. (I see this as a real benefit of forums over wikis, not as a bug in how people behave.)
Alternately, distinguish upvotes (“This is a good contribution”) from promotion votes (“This is a significant contribution on an LW core topic”).
The UI starts to get really unwieldy if you do that.