The main driver for the limit was to prevent someone registering one or more accounts and being free to vote everything down.
Well, in that case, we could just pick a threshold and say anyone above that level gets the keys to the ammo closet.
ETA: Or the garden shed, to stick with the going metaphor.
I agree with this, but with no limit on upvotes, the hypothetical spammer could just register several accounts that give each other lots of karma, then use one of them to vote everything down. But then, that’s a problem for this system too.
Are we just chasing ghosts?
For the moment, yes—I don’t see us being a high-enough priority target to be anticipating barbarians with torches and pitchforks. Still, if we go as far as we’re hoping, it seems like a possibility.
The right model would perform some kind of flow analysis, starting from the vote graph. I’d guess there are standard techniques in web search (where links act as votes) for resisting link spam (PageRank? I haven’t studied how it works).
You are looking for “attack resistant trust metrics”. Google’s PageRank is one such; Raph Levien’s Advogato algorithm is another; my TrustFlow is a third.
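For the curious, here’s a minimal Python sketch of the flow idea. The function, the graph encoding, and the seed-biased teleport are my own illustration of the general family (roughly personalized PageRank), not a rendering of PageRank proper, Advogato, or TrustFlow: trust originates at a set of trusted seed accounts and flows along upvote edges, so a sockpuppet ring with no inbound votes from trusted users accumulates nothing.

    # Flow-based trust sketch over a vote graph (illustrative only).
    # Each upvote is an edge voter -> author; trust flows along edges,
    # originating at a trusted seed set rather than uniformly.
    def trust_scores(upvotes, seeds, damping=0.85, iterations=50):
        """upvotes: user -> set of authors they upvoted; seeds: trusted users."""
        users = set(upvotes) | {a for ts in upvotes.values() for a in ts} | set(seeds)
        score = {u: 1.0 / len(users) for u in users}
        for _ in range(iterations):
            # Teleport mass goes only to the trusted seeds, not uniformly.
            new = {u: 0.0 for u in users}
            for s in seeds:
                new[s] += (1.0 - damping) / len(seeds)
            dangling = 0.0
            for u in users:
                targets = upvotes.get(u) or set()
                if targets:
                    share = damping * score[u] / len(targets)
                    for author in targets:
                        new[author] += share
                else:
                    dangling += damping * score[u]  # users who cast no votes
            for s in seeds:
                new[s] += dangling / len(seeds)
            score = new
        return score

    # A sockpuppet ring only trades its own initial mass back and forth,
    # which decays toward zero; score has to flow in from the seeds.
    votes = {"admin": {"carol"}, "carol": {"alice"},
             "sock1": {"sock2"}, "sock2": {"sock1"}}
    print(trust_scores(votes, seeds={"admin"}))

Note that a uniform teleport (vanilla PageRank) would still leak some score to the sock ring; concentrating it on trusted seeds is what buys the attack resistance.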
I’ve wondered if it wouldn’t be better to give relatively more weight to votes from people with high karma. I haven’t proposed it because it seems counterproductive to escaping the dangers of an echo chamber, and because it would be a conflict of interest coming from someone with high karma.
The rule should be stateless: users vote, and that’s the data. Then an algorithm computes users’ karma and comments’ ratings from that data alone. Giving more weight to high-karma users (adjusted for inadequate voting, i.e. voting significantly differently from what others value, and for the balance of votes) is similar to electing moderators, and is ultimately rooted in the founders’ conception of what’s on-topic and what’s valued on this particular forum.
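As a sketch of what “votes in, scores out” could look like (my own formulation, not anything this thread settled on): recompute everything from the raw vote log, weighting each vote by the voter’s current karma and iterating to a fixed point.

    # Stateless karma/rating sketch (my own formulation): the only state
    # is the raw vote log; karma and ratings are recomputed from scratch,
    # each vote weighted by the voter's current karma.
    def compute_scores(votes, authors, iterations=30):
        """votes: (voter, comment_id, +1/-1) tuples; authors: comment_id -> author.
        Every comment_id appearing in votes must appear in authors."""
        users = {v for v, _, _ in votes} | set(authors.values())
        karma = {u: 1.0 for u in users}
        rating = {}
        for _ in range(iterations):
            # Comment rating = karma-weighted sum of its votes.
            rating = {c: 0.0 for c in authors}
            for voter, comment, sign in votes:
                rating[comment] += sign * karma[voter]
            # Karma = small baseline plus the positive ratings you earned.
            karma = {u: 0.1 for u in users}
            for comment, author in authors.items():
                karma[author] += max(rating[comment], 0.0)
            # Normalize so mean karma stays at 1 and the iteration is stable.
            total = sum(karma.values())
            karma = {u: k * len(users) / total for u, k in karma.items()}
        return karma, rating

    votes = [("alice", "c1", +1), ("bob", "c1", +1), ("mallory", "c2", -1)]
    authors = {"c1": "carol", "c2": "carol"}
    print(compute_scores(votes, authors))

The “adjusted for inadequate voting” part would slot in as an extra penalty on the karma of voters who consistently disagree with the final ratings; I’ve left it out to keep the sketch small.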
I agree. Have a karma-based limit under a certain threshold; then, above that threshold, free rein.
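In code the agreed rule is tiny; the numbers here are arbitrary placeholders, not anything settled in this thread:

    # Karma-gated downvoting: rationed below the threshold, free above it.
    KARMA_THRESHOLD = 100      # placeholder value
    DOWNVOTES_PER_KARMA = 0.5  # placeholder ration rate

    def may_downvote(karma: int, downvotes_used: int) -> bool:
        if karma >= KARMA_THRESHOLD:
            return True  # keys to the ammo closet / garden shed
        return downvotes_used < karma * DOWNVOTES_PER_KARMA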