There’s a big stigma now against platforms that give evaluations or ratings of individuals or organizations along various dimensions. See the rating episode of Black Mirror (“Nosedive”), or the discussion around the Chinese social credit system.
I feel like this could be a bit of a missed opportunity. This sort of technology is easy to do destructively, but it could bring a huge number of benefits if done well.
We already have credit scores, resumes (which are effectively scores), and social media metrics. All of these are really crude.
Some examples of things that could be possible:
Romantic partners could be screened to make sure they are very unlikely to be physically abusive.
Politicians could be much more intensely ranked on different dimensions, and their bad behaviors listed.
People who might seem sketchy to some (perhaps because they are a bit racist) could be revealed to be well-intentioned and harmless.
People who are likely to steal things could be restricted from entering certain public spaces. This would allow for much more high-trust environments. For example, more situations where customers are trusted to just pay the right amounts on their own.
People could be subtly motivated to be nicer to each other, even in situations where they are unlikely to see each other again.
Most business deals could include checks on the trustworthiness of the parties involved. It really should become near-impossible to build a career out of repeated scams.
These sorts of evaluation systems essentially promote the values of whoever controls them. If those in charge are effectively the public (as opposed to a corrupt government agency), this could turn out well.
If done well, algorithms should be able to help us transition to much higher-trust societies.
How would you design a review system that cannot be gamed (very easily)?
For example: Someone sends a message to their 100 friends, and tells them to open the romantic partners app and falsely accuse you of date rape. Suppose they do. What exactly happens next?
Either you are forever publicly marked as a rapist, with no recourse.
Or you report the accusations as spam, or sue the accusers… but an actual rapist could do exactly the same, assuming the victims have no proof.
Both outcomes seem bad to me, and I don’t see how to design a system that prevents them both.
(And if we restrict the system to situations where there is proof… well, then you don’t actually need a mutual rating system, just an app that searches the official records.)
The same goes for other apps… politicians of the other party will automatically be accused of everything; business competitors will be reported as untrustworthy; people who haven’t even opened your book will give it a zero-star rating.
(Amazon reduces the last problem somewhat by requiring that reviewers actually buy the book first. But even then, it just becomes a cost/benefit question: you can still post X fake negative reviews in exchange for spending a proportional amount of money on copies of the book, with no intention of reading them. So you won’t write fake negative reviews for fun, but you can still review-bomb the people you truly hate.)
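To make that cost/benefit point concrete, here is a rough back-of-the-envelope sketch (the price and review counts are invented for illustration, not real Amazon figures):

```python
# Hypothetical numbers: a verified-purchase requirement turns review-bombing
# from a free attack into a priced one.
book_price = 15.00    # assumed price per copy (made-up)
fake_reviews = 50     # fake negative reviews the attacker wants to post

attack_cost = book_price * fake_reviews
print(f"Posting {fake_reviews} fake verified reviews costs ${attack_cost:,.0f}")
# -> Posting 50 fake verified reviews costs $750
```

Cheap enough for someone with a serious grudge, but expensive enough to stop casual drive-by malice, which is roughly the trade-off described above.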
I think it’s very much a matter of unit economics. Court systems have a long history of dealing with false accusations while still managing to uphold some sort of standard around many kinds of activity (murder and abuse, for instance).
When it comes to false accusations, there are various ways of checking and verifying them; such procedures are common in courts and other respected institutions.
If 100 people all opened an application and posted at around the same time, that would be fairly easy to detect, provided the organization had reasonable resources. Hacker News and similar sites deal with this sort of thing (though obviously much less dramatic) all the time, in the form of spam attacks and upvote rings.
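As a concrete sketch of the kind of detection I have in mind (the window size, threshold, and data format are all made-up assumptions, not a description of any real platform):

```python
from datetime import timedelta

def flag_coordinated_bursts(reports, window=timedelta(hours=6), threshold=20):
    """Flag targets whose accusations cluster suspiciously in time.

    `reports` is a list of (accuser_id, target_id, timestamp) tuples.
    `window` and `threshold` are illustrative guesses; a real system would
    calibrate them against each target's baseline report rate.
    """
    by_target = {}
    for _accuser, target, ts in reports:
        by_target.setdefault(target, []).append(ts)

    flagged = set()
    for target, times in by_target.items():
        times.sort()
        start = 0
        # Sliding window: for each report, count how many others fall within
        # `window` of it; a large spike suggests coordination.
        for end in range(len(times)):
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 >= threshold:
                flagged.add(target)
                break
    return flagged
```

Flagging wouldn’t mean automatic punishment, of course; it would just route the cluster to whatever slower, human review process the organization can afford.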
There’s obviously always going to be some error rate, as is true for court systems.
I think it’s very possible that the efforts feasible for us in the next 1–10 years in this area would be too expensive to be worth it, especially because they might be very difficult to raise money for. However, I hope that our abilities here eventually allow for systems with much more promising trade-offs.