I hadn't thought of that, but should karma really be independent of the order in which votes are cast? Shouldn't a person who gets 20 downvotes followed by 20 upvotes end up with higher karma than a person who got 20 upvotes followed by 20 downvotes? The first pattern suggests improvement; the second suggests getting less interesting over time.
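To make that order-sensitivity concrete, here's a minimal sketch of one possible update rule (my own hypothetical choice of an exponentially weighted running score, not a description of any actual implementation): recent votes count more than old ones, so the two orderings end up with opposite signs.

```python
def karma(votes, decay=0.9):
    """Exponentially weighted karma: every earlier vote is discounted by
    `decay` for each vote that follows it, so recent votes dominate."""
    score = 0.0
    for v in votes:          # v is +1 for an upvote, -1 for a downvote
        score = decay * score + v
    return score

improving = [-1] * 20 + [+1] * 20   # 20 downvotes, then 20 upvotes
declining = [+1] * 20 + [-1] * 20   # 20 upvotes, then 20 downvotes

print(karma(improving))   # roughly +7.7: the recent upvotes dominate
print(karma(declining))   # roughly -7.7: the recent downvotes dominate
```

Any rule with a decay or learning-rate flavor behaves this way; a plain sum of votes, by contrast, gives both people exactly zero.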
I'm confused by how you're doing the computation, though. If half of X's votes on Y are positive and half are negative, I would expect X's total contribution to Y to come out to zero; I wouldn't keep a running sum of X's contribution to the karma of each thing Y has said. We could also go back and recompute the contributions to previous comments as X makes more comments, but I'd probably rather have an adaptive algorithm, so that the score on an individual comment reflects the situation at the moment the rating was made.
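Here's a hedged sketch of the two readings as I understand them (the vote tuples and the per-vote weight are my own inventions for illustration, standing in for however much X's opinion counts at a given moment): one pass recomputes X's total contribution to Y from scratch, where balanced up- and downvotes cancel to zero; the other freezes each comment's score at the moment the rating was made and never revisits it.

```python
# Hypothetical vote log by X on Y's comments: (comment_id, value, weight_at_vote_time),
# where value is +1/-1 and the weight is whatever X's vote was worth back then.
votes_by_x_on_y = [
    ("c1", +1, 0.5),
    ("c2", -1, 0.5),
    ("c3", +1, 2.0),   # X's weight has grown by the time of the later votes
    ("c4", -1, 2.0),
]

current_weight_of_x = 2.0

# Reading 1: recompute X's total contribution to Y with X's current weight;
# half-positive, half-negative votes cancel to exactly zero.
total_contribution = sum(value * current_weight_of_x
                         for _, value, _ in votes_by_x_on_y)
print(total_contribution)   # 0.0

# Reading 2 (the adaptive version): each comment keeps the score it was given
# when the vote landed, using the weight X had at that time; nothing is
# recomputed later, so per-comment scores record the situation at rating time.
frozen_scores = {cid: value * weight for cid, value, weight in votes_by_x_on_y}
print(frozen_scores)        # {'c1': 0.5, 'c2': -0.5, 'c3': 2.0, 'c4': -2.0}
```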
Even if we did it that way, though, this sensitivity isn't a real problem. Nearly every adaptive or learning algorithm has that kind of sensitivity, and in practice it hardly ever matters once there's enough data. Text compression algorithms don't produce drastically different compression ratios if you shuffle blocks of the input text around.
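You can check the compression analogy empirically with something like the snippet below (the corpus and block size are arbitrary choices of mine, and the exact byte counts will vary, but the two compressed sizes come out in the same ballpark):

```python
import random
import zlib

# Build a repetitive text, shuffle fixed-size blocks of it, and compare
# compressed sizes. Illustrative only; numbers depend on corpus and block size.
text = b"the quick brown fox jumps over the lazy dog. " * 2000

block = 512
blocks = [text[i:i + block] for i in range(0, len(text), block)]
random.shuffle(blocks)
shuffled = b"".join(blocks)

print(len(zlib.compress(text)))      # compressed size of the original order
print(len(zlib.compress(shuffled)))  # typically close to the same size
```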