For starters, a system to be sure that a user or service is the same user or service it was previously.
That seems to be pretty trivial. What’s wrong with a username/password combo (besides all the usual things) or, if you want to get a bit more sophisticated, with having the user generate a private key for himself?
You don’t need a web of trust or any central authority to verify that the user named X is in possession of a private key which the user named X had before.
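To make that concrete, here’s a minimal sketch of the challenge-response version of “same key as before”, using Ed25519 via the `cryptography` library; the names and storage are illustrative, not any particular forum’s API:

```python
# Minimal sketch: the server checks that whoever claims to be "X" can sign a
# fresh nonce with the same key "X" registered earlier. No web of trust or
# central authority involved; names and storage here are illustrative only.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the user generates a key pair; the server stores the public half.
user_key = Ed25519PrivateKey.generate()        # stays on the client
stored_public_key = user_key.public_key()      # server keeps this, keyed by username

# Later login: the server issues a random challenge, the client signs it.
challenge = os.urandom(32)
signature = user_key.sign(challenge)           # done client-side

# Server side: same key as before => same user as before.
try:
    stored_public_key.verify(signature, challenge)
    print("same key as last time; treat as the same user")
except InvalidSignature:
    print("signature does not match the stored key")
```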
I’m more interested in if anyone’s trying to solve it.
Well, again, the critical question is: What are you really trying to achieve?
If you want the online equivalent of meatspace reputation, well, first, meatspace reputation does not exist as one convenient number, and second, it’s still a two-argument function: it depends both on the person being judged and on whoever is doing the judging.
there are no attempts to run multi-dimensional reputation systems, to weight votes by length of post or age of poster, or to apply spellcheck or capitalization thresholds.
Once again, with feeling :-D—to which purpose? Generally speaking, if you run a forum all you need is a way to filter out idiots and trolls. Your regular users will figure out reputation on their own and their conclusions will be all different. You can build an automated system to suit your fancy, but there’s no guarantee that it will suit other people well (and, actually, a pretty solid bet that it won’t).
I expect Twitter or Facebook have something complex underneath the hood
Why would Twitter or FB bother assigning reputation to users? They want to filter out bad actors and maximize their eyeballs and their revenue, which generally means keeping users sufficiently happy and well-measured.
That seems to be pretty trivial. What’s wrong with a username/password combo (besides all the usual things)
“All the usual things” are many, and some of them are quite wrong indeed.
If you need solid long-term authentication, outsource it to someone whose business depends on doing it right. Google, for instance, is really quite good at detecting unauthorized use of an account (i.e., your Gmail getting hacked). It’s better (for a number of reasons) not to be beholden to a single authentication provider, though, which is why there are things like OpenID Connect that let users authenticate using Google, Facebook, or various other sources.
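For illustration, the “outsource it” route mostly reduces to validating the ID token the provider hands back after the OpenID Connect redirect. A rough sketch using PyJWT, where the client ID is a placeholder and the redirect flow itself is omitted:

```python
# Rough sketch of validating a Google-issued OpenID Connect ID token with PyJWT.
# CLIENT_ID is a placeholder; the id_token comes back from the usual
# OAuth2/OIDC redirect flow, which is not shown here.
import jwt
from jwt import PyJWKClient

GOOGLE_JWKS_URL = "https://www.googleapis.com/oauth2/v3/certs"
CLIENT_ID = "your-app-client-id.apps.googleusercontent.com"  # placeholder

def verify_google_id_token(id_token: str) -> dict:
    # Fetch Google's current signing key and check signature, issuer,
    # audience, and expiry in one call. Raises jwt exceptions on failure.
    signing_key = PyJWKClient(GOOGLE_JWKS_URL).get_signing_key_from_jwt(id_token)
    return jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=CLIENT_ID,
        issuer="https://accounts.google.com",
    )

# claims = verify_google_id_token(token_from_redirect)
# claims["sub"] is the stable identifier to key the forum account on.
```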
On the other hand, if you need authorization without (much) authentication — for instance, to let anonymous users delete their own posts, but not other people’s — maybe you want tripcodes.
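The tripcode idea is just “hash a secret the poster supplies and display the hash as their tag”; a simplified sketch (deliberately not the classic imageboard crypt(3) scheme) might look like:

```python
# Simplified tripcode sketch: authorization without real authentication.
# Anyone who can reproduce the same secret gets the same displayed tag, so an
# anonymous poster can later prove "that post was mine" (e.g. to delete it).
# Illustrative hash only, not the classic imageboard crypt(3) algorithm.
import hashlib

def tripcode(secret: str, site_salt: str = "example-salt") -> str:
    digest = hashlib.sha256((site_salt + secret).encode("utf-8")).hexdigest()
    return digest[:10]  # short tag shown next to "Anonymous"

post_tag = tripcode("hunter2")        # stored alongside the post
claimed = tripcode("hunter2")         # supplied again when asking to delete
allow_delete = (claimed == post_tag)  # no account or password involved
```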
And if you need to detect sock puppets (one person pretending to be several people), you may have an easy time or you may be in hard machine-learning territory. (See the obvious recent thread for more.) Some services — like Wikipedia — seem to attract some really dedicated puppeteers.
What’s wrong with a username/password combo (besides all the usual things) or, if you want to get a bit more sophisticated, with having the user generate a private key for himself?
In addition to the usual problems, which are pretty serious to start with, you’re relying on the client. To borrow from information security, the client is in the hands of the enemy. Sockpuppet attacks (Sybil attacks, in trust-network terms), where one entity pretends to be many different users, and impersonation attacks, where a user pretends to be someone they are not, are both well-documented and exceptionally common. Every forum package I can find relies on social taboos or on simply ignoring the problem, followed by direct human administrator intervention, and most don’t even make administrator intervention easy.
There are also very few sites with integrated support for private-key-style authentication, and most forum packages don’t even work well with all password managers.
This isn’t a problem that can be perfectly solved, true. But right now it hasn’t even got band-aids.
Once again, with feeling :-D—to which purpose? Generally speaking, if you run a forum all you need is a way to filter out idiots and trolls. Your regular users will figure out reputation on their own and their conclusions will be all different.
“Normal” social reputation runs into pretty significant issues as soon as the group grows beyond what one person can track—I can imagine folk who could handle a couple thousand names, but it’s common for a site to have orders of magnitude more users. Automated systems can also provide useful tools for noticing and handling patterns that are much more evident in pure data than in “expert judgments”. But these are relatively minor benefits.
At a deeper level, a well-formed reputation system should encourage ‘good’ posting (posting that matches the expressed desires of the forum community) and discourage ‘bad’ posting (posting that goes against those desires), as well as reduce incentives toward me-too or this-is-wrong-stop responses.
This isn’t without trade-offs: you’ll implicitly make the forum’s culture drift more slowly, and encourage the surviving dissenters to be contrarians for whom the reputation system doesn’t matter. But the existing reputation systems don’t let you make that trade-off, and instead you have to decide whether to use a far more naive system that is very vulnerable to attack.
You can build an automated system to suit your fancy, but there’s no guarantee that it will suit other people well (and, actually, a pretty solid bet that it won’t).
To some extent—spell-check and capitalization expectations for a writing community will differ from those of a video-game or chemistry forum, and help forums will expect shorter-lived accounts than the median community—but a sizable number of these aspects are common to nearly all communities.
Why would Twitter or FB bother assigning reputation to users? They want to filter out bad actors and maximize their eyeballs and their revenue, which generally means keeping users sufficiently happy and well-measured.
They have incentives toward keeping users. “Bad” posters are tautologically a disincentive for most users (exceptions: some folk do show revealed preferences for hearing from terrible people).
Yes, of course, but if we start to talk in these terms, the first in line is the standard question: What is your threat model?
I also don’t think there’s a good solution to sockpuppetry short of mandatory biometrics.
But the existing reputation systems don’t let you make that trade-off
Why not? The trade-off is in the details of how much reputation matters. There is a large space between reputation being just a number that’s not used anywhere and reputation determining what, how, and when you can post.
very vulnerable to attack
Attack? Again, threat model, please.
“Bad” posters are tautologically a disincentive for most users
Not if you can trivially block/ignore them, which is the case for Twitter and FB.
An attacker creates a large number of nodes and overwhelms any signal in the initial system.
For the specific example of a reddit-based forum, it’s trivial for an attacker to make up a sizable proportion of assigned reputation points through the use of sockpuppets. It is only moderately difficult for an attacker to automate the time-consuming portions of this process.
I also don’t think there’s a good solution to sockpuppetry short of mandatory biometrics.
10% of the problem is hard. That does not explain the small amount of work done on the other 90%. The vast majority of sockpuppets aren’t that sophisticated: most don’t use VPNs or anonymizers, most don’t vary their writing style much, and many even use the same browser from one persona to the next. It’s also common for sockpuppets to share certain network attributes with their original persona. Full authorship analysis has both structural (primarily training bias) and pragmatic (CPU time) limitations that would make it infeasible for large forums...
But there are a number of fairly simple anti-sockpuppet checks that computers handle better than humans, yet which today still require often-unpleasant manual work.
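To illustrate the kind of simple step I mean, a crude first pass could just pair up accounts that share a network range, a browser fingerprint, and threads; the field names and thresholds below are made up for illustration, and the output is a shortlist for a human, not an auto-ban:

```python
# Crude sockpuppet triage of the sort a moderator would otherwise do by hand:
# flag account pairs sharing a /24 network, a user-agent string, and several
# threads. Field names and thresholds are illustrative placeholders.
from itertools import combinations

def shared_prefix(ip_a: str, ip_b: str) -> bool:
    return ip_a.split(".")[:3] == ip_b.split(".")[:3]  # same /24, roughly

def suspicious_pairs(accounts):
    # accounts: list of dicts like
    # {"name": ..., "ips": set(), "user_agents": set(), "threads": set()}
    flagged = []
    for a, b in combinations(accounts, 2):
        same_net = any(shared_prefix(x, y) for x in a["ips"] for y in b["ips"])
        same_ua = bool(a["user_agents"] & b["user_agents"])
        same_threads = len(a["threads"] & b["threads"]) >= 3
        if same_net and same_ua and same_threads:
            flagged.append((a["name"], b["name"]))
    return flagged  # a list for human review, not automatic bans
```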
Why not? The trade-off is in the details of how much reputation matters. There is a large space between reputation being just a number that’s not used anywhere and reputation determining what, how, and when you can post.
Yes, but there aren’t documented open-source systems that do these things beyond the most basic level. At most, there are simple reputation systems where a small amount of reputation has some impact on site functionality, such as this site. But Reddit’s codebase does not allow upvotes to be limited or weighted by account age, and changing that would require pretty significant work. (The main site at least acts against some of the more overt mass-downvoting by acting against downvotes applied via the profile page, but this doesn’t seem present here?)
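To make the missing knob concrete, the rule in question fits in one screen; something like the sketch below, where the age cutoffs and weights are arbitrary placeholders rather than anything Reddit actually ships:

```python
# Sketch of the missing knob: weight (or discard) a vote based on the voter's
# account age and karma before it touches a post's score. Thresholds and
# weights are arbitrary placeholders, not anyone's production values.
from datetime import datetime, timedelta
from typing import Optional

def vote_weight(account_created: datetime, karma: int,
                now: Optional[datetime] = None) -> float:
    now = now or datetime.utcnow()
    age = now - account_created
    if age < timedelta(days=7) or karma < 10:
        return 0.0   # brand-new or low-karma accounts don't move scores at all
    if age < timedelta(days=90):
        return 0.5   # young accounts count, but at half weight
    return 1.0       # established accounts get full weight

def apply_vote(post_score: float, direction: int,
               account_created: datetime, karma: int) -> float:
    # direction is +1 for an upvote, -1 for a downvote
    return post_score + direction * vote_weight(account_created, karma)
```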
Not if you can trivially block/ignore them, which is the case for Twitter and FB.
If a large enough percentage of outside user content is “bad”, users begin to treat that space as advertising and ignore it. Many forums also don’t make it easy to block users (see here), and almost none handle blocking even the most overt of sockpuppets well.
An attacker creates a large number of nodes and overwhelms any signal in the initial system.
For the specific example of a reddit-based forum, it’s trivial for an attacker to make up a sizable proportion of assigned reputation points through the use of sockpuppets. It is only moderately difficult for an attacker to automate the time-consuming portions of this process.
You seem to want to build a massive sledgehammer-wielding mech to solve the problem of fruit flies on a banana.
So the attacker expends a not inconsiderable amount of effort to build his sockpuppet army and achieves sky-high karma on a forum. And..? It’s not like you can sell karma, or even gain respect for your posts from anyone other than newbies. What would be the point?
Not to mention that there is a lot of empirical evidence out there—formal reputation systems on forums go back at least as far as early Slashdot and, y’know, they kinda work. They don’t achieve anything spectacular, but they also tend not to have massive failure modes. Once the sockpuppet general gains the attention of an admin or at least a moderator, his army is useless.
You want to write a library which will attempt to identify sockpuppets through some kind of multifactor analysis? Sure, that would be a nice thing to have—as long as it’s reasonable about things. One of the problems with automated defense mechanisms is that they can often be used as DoS tools if the admin is not careful.
If a large enough percentage of outside user content is “bad”
That still actually is the case for Twitter and FB.
Limit the ability of low karma users to upvote.