How would you design a review system that cannot be gamed (very easily)?
For example: Someone sends a message to their 100 friends, and tells them to open the romantic partners app and falsely accuse you of date rape. Suppose they do. What exactly happens next?
You are forever publicly marked as a rapist, no recourse.
Or you report those accusations as spam, or sue the people… but an actual rapist could do exactly the same, assuming the victims have no proof.
Both outcomes seem bad to me, and I don’t see how to design a system that prevents them both.
(And if we restrict the system to situations where there is proof… well, then you don’t actually need a mutual rating system, just an app that searches for people in the official records.)
The same goes for other apps… politicians of the other party will automatically be accused of everything; business competitors will be reported as untrustworthy; people who haven’t even seen your book will give it a zero-star rating.
(Amazon somewhat mitigates the last one by requiring that reviewers actually buy the book first. But even then, this just turns it into a cost/benefit question: you can still leave someone X fake negative reviews, in return for spending a proportional amount of money on actually buying their book… with no intention of reading it. So you won’t write fake negative reviews for fun, but you can still review-bomb the people you truly hate.)
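To make that cost/benefit point concrete, here is a minimal sketch of the arithmetic a verified-purchase rule imposes on an attacker. The price and attack size are made-up illustrative numbers, not data from Amazon:

```python
# Rough arithmetic for review-bombing under a "verified purchase" rule.
# Both numbers below are illustrative assumptions, not real data.

book_price = 15.00    # assumed price the attacker must pay per fake review
fake_reviews = 100    # size of the coordinated attack

attack_cost = book_price * fake_reviews
print(f"Cost to post {fake_reviews} verified fake reviews: ${attack_cost:,.2f}")
# => Cost to post 100 verified fake reviews: $1,500.00
```

The rule doesn’t make the attack impossible; it just attaches a price tag that grows linearly with the number of fake reviews, which is enough to deter casual abuse but not determined grudges.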
I think it’s very much a matter of unit economics. Court systems have a long history of dealing with false accusations, yet have still managed to uphold some standard around many kinds of activity (murder and abuse, for instance).
When it comes to false accusations, there are various ways of checking and verifying them; such procedures are common in courts and other established institutions.
If 100 people all opened an application and posted at around the same time, that would be fairly easy to detect, provided the organization had reasonable resources. Hacker News and similar sites deal with comparable (though obviously much less dramatic) situations all the time, in the form of spam attacks and upvote rings.
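As a minimal sketch of the kind of detection this implies (the data model and thresholds are hypothetical; a real system would combine many more signals such as IP/device overlap, social graph, and text similarity), here is a heuristic that flags a target receiving a burst of reviews from recently created accounts within a short time window:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Review:
    target_id: str            # who/what is being reviewed
    reviewer_id: str
    posted_at: float          # unix timestamp of the review
    account_created_at: float # unix timestamp of the reviewer's signup

def flag_suspicious_bursts(reviews, window_secs=3600, min_burst=20,
                           max_account_age=7 * 86400):
    """Flag targets hit by a burst of reviews from young accounts.

    Illustrative heuristic only: counts, per target, how many reviews from
    accounts younger than `max_account_age` land within any `window_secs`
    sliding window, and flags the target if that count reaches `min_burst`.
    """
    by_target = defaultdict(list)
    for r in reviews:
        by_target[r.target_id].append(r)

    flagged = {}
    for target, rs in by_target.items():
        rs.sort(key=lambda r: r.posted_at)
        start = 0
        for end in range(len(rs)):
            # Shrink the window until it spans at most `window_secs`.
            while rs[end].posted_at - rs[start].posted_at > window_secs:
                start += 1
            window = rs[start:end + 1]
            young = [r for r in window
                     if r.posted_at - r.account_created_at < max_account_age]
            if len(young) >= min_burst:
                flagged[target] = len(young)
                break
    return flagged  # {target_id: size of the suspicious burst}
```

A coordinated attack like the 100-friends scenario above would trip a check of this kind almost immediately; the genuinely expensive cases are slower, more distributed campaigns.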
There’s obviously always going to be some error rate, as is true for court systems.
I think it’s quite possible that the efforts feasible in this area over the next 1-10 years would be too expensive to be worth it, especially because they might be very difficult to raise money for. However, I would hope that capabilities here eventually allow for systems with much more promising trade-offs.