Short version of how this is different, for those too lazy to click on the link: if you sort by “top”, comments get sorted in a simple “the ones with the highest score go on top” order. This has the problem that it favors comments that were posted early on, since they’re the ones that people see first and they’ve had a lot of time to gather upvotes. A good comment that’s posted late might get stuck near the bottom because few people ever scroll all the way down to upvote it.
“Best” uses some statistical magic to fix that:
If everyone got a chance to see a comment and vote on it, it would get some proportion of upvotes to downvotes. This algorithm treats the vote count as a statistical sampling of a hypothetical full vote by everyone, much as in an opinion poll. It uses this to calculate the 95% confidence score for the comment. That is, it gives the comment a provisional ranking that it is 95% sure it will get to. The more votes, the closer the 95% confidence score gets to the actual score.
If a comment has one upvote and zero downvotes, it has a 100% upvote rate, but since there’s not very much data, the system will keep it near the bottom. But if it has 10 upvotes and only 1 downvote, the system might have enough confidence to place it above something with 40 upvotes and 20 downvotes—figuring that by the time it’s also gotten 40 upvotes, it’s almost certain it will have fewer than 20 downvotes. And the best part is that if it’s wrong (which it is 5% of the time), it will quickly get more data, since the comment with less data is near the top—and when it gets that data, it will quickly correct the comment’s position. The bottom line is that this system means good comments will jump quickly to the top and stay there, and bad comments will hover near the bottom. (Picky readers might observe that some comments probably get a higher rate of votes, up or down, than others, which this system doesn’t explicitly model. However, any bias which that introduces is tiny in comparison to the time bias which the system removes, and comments which get fewer overall votes will stay a bit lower anyway due to lower confidence.)
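For concreteness, here's a minimal sketch in Python of the confidence-interval ranking idea described above. The formula commonly used for this kind of ranking is the lower bound of the Wilson score interval at 95% confidence; I'm assuming that's essentially what the linked post does, which goes slightly beyond the summary here.

    import math

    def wilson_lower_bound(ups, downs, z=1.96):
        # Lower bound of the Wilson score interval for the upvote proportion.
        # z = 1.96 corresponds to 95% confidence. With few votes the bound
        # sits well below the raw proportion; as votes accumulate it
        # approaches ups / (ups + downs).
        n = ups + downs
        if n == 0:
            return 0.0
        p = ups / n
        return (p + z * z / (2 * n)
                - z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)

    # The 10-up/1-down comment from the example outranks the 40-up/20-down one:
    print(wilson_lower_bound(10, 1))   # ~0.62
    print(wilson_lower_bound(40, 20))  # ~0.54
    print(wilson_lower_bound(1, 0))    # ~0.21 -- a lone upvote stays near the bottom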
Not sure I fully understood that either. But they say it works well, so I guess I’ll trust them!
I’m curious whether the math still works correctly on a site where the default karma is 1 instead of 0. But since it’s magic to start with, I guess “meh”. Let’s just not use it to calculate CEV or anything. ;-)
I think what they’re doing is statistical inference for the fraction upvotes/total_votes. I’m not sure this is the best model possible, but it seems to have worked well enough.
I suspect they’re taking the mean of the 95% confidence interval, but I’m not sure. There’s actually a pretty natural way to do this more rigorously in a Bayesian framework, called hierarchical modeling (similar to this), but it can be complex to fit such a model.
Edit: However, a simpler Bayesian approach would just be to do inference for a proportion using a ‘reasonable’ prior for the proportion (one that approximates the actual distribution of proportions), expressed as a Beta distribution (this makes the math easy). Come to think of it, this would actually be pretty easy to implement. You could even fit a full hierarchical model on a data set and then use the prior for the proportion you get from that in your algorithm. The advantage is that you can do the full hierarchical fit offline in R, so you avoid both running the expensive fitting repeatedly and coding it up in the live system. The rest of the math is very simple. This idea is simple enough that I bet someone else has done it.
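For what it’s worth, here is one crude way to do that offline step in Python: fit a Beta distribution to the observed upvote proportions of a historical set of comments by matching moments. This is not the full hierarchical model (it treats each observed proportion as exact rather than noisy), and the function name and sample data are purely illustrative.

    def fit_beta_prior(vote_pairs):
        # Method-of-moments fit of a Beta(alpha, beta) distribution to the
        # observed upvote proportions. vote_pairs is a list of (ups, downs).
        props = [u / (u + d) for u, d in vote_pairs if u + d > 0]
        n = len(props)
        m = sum(props) / n                              # sample mean
        v = sum((p - m) ** 2 for p in props) / (n - 1)  # sample variance
        common = m * (1 - m) / v - 1
        return m * common, (1 - m) * common

    # Hypothetical historical data; alpha and beta then play the role of x and y below.
    alpha, beta = fit_beta_prior([(10, 1), (40, 20), (3, 4), (25, 5), (2, 2)])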
If you use the Bayes approach with a Beta(x,y) prior, all you do is for each post add x to the # of upvotes, add y to the # of downvotes, and then compute the % of votes which are upvotes. [1]
In my college AI class we used this exact method with x=y=1 to adjust for low sample size. Someone should switch out the clunky frequentist method reddit apparently uses with this Bayesian method!
[1] This seems to be what it says in the pdf.
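To make that recipe concrete, the whole scoring rule fits in a couple of lines of Python; with x = y = 1 it is the Laplace-smoothing variant from the AI-class anecdote above. The prior values are whatever you choose (or fit), not anything reddit actually uses.

    def beta_smoothed_score(ups, downs, x=1.0, y=1.0):
        # Posterior mean of the upvote proportion under a Beta(x, y) prior:
        # add x phantom upvotes and y phantom downvotes, then take the percentage.
        return (ups + x) / (ups + downs + x + y)

    # With x = y = 1, a lone upvote is pulled from 100% toward 50%:
    print(beta_smoothed_score(1, 0))    # ~0.67 instead of 1.0
    print(beta_smoothed_score(10, 1))   # ~0.85
    print(beta_smoothed_score(40, 20))  # ~0.66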