I’d be interested in your work on recommendation systems. How well does it deal with semi-intelligent spammers? That is, spammers who copy other normal people’s ratings for the most part, but then alter their behaviour to prefer something more than it is worth.
Personally, I think a good recommendation system could have a vast impact on society, mainly through knock-on effects: if you save charity X time and money on deciding which programmer to employ (thanks to a recommendation system), they can spend that money on actually helping people.
I’m interested in what you think is more important!
How well does it deal with semi-intelligent spammers?
The most obvious thing distinguishing my work from previous attempts is that it attains all its guarantees even if 99% of all users are arbitrarily sophisticated adversaries. The convergence time depends on the logarithm of the inverse of the fraction of honest users. So if only one user in 10^10 is honest, it will take about 30 times longer to converge to good recommendations.
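For a rough sense of that scaling, here is a toy calculation (the base-2 logarithm and the majority-honest baseline are my own illustrative assumptions, not part of the stated guarantee):

```python
import math

# Toy scaling check: if convergence time grows like log(1 / honest_fraction),
# how much slower is convergence when only 1 in 10^10 users is honest,
# relative to a majority-honest (1/2) baseline? Constants are illustrative.
def convergence_slowdown(honest_fraction, baseline_fraction=0.5):
    return math.log(1 / honest_fraction) / math.log(1 / baseline_fraction)

print(round(convergence_slowdown(1e-10)))  # prints 33, roughly the "30 times longer" figure
```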
The goal is to find the best fixed moderation strategy (i.e., block all messages from users X, Y, Z...) for the honest users. Here the definition of honest users is arbitrary: if you choose any subset of the users to declare honest, then over a reasonable time frame the recommendations produced by the recommender system should do as well (using whatever scoring metric you use to give feedback) as the best fixed moderation strategy tailored to those users. Note that the system doesn’t know which users are “honest”: the guarantee holds simultaneously over all choices.
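To make “best fixed moderation strategy in hindsight” concrete, here is a toy brute-force version (the squared-error scoring and the average-the-unblocked-ratings rule are my own placeholder choices, not the actual setup): enumerate every subset of users to block and keep whichever subset would have scored best.

```python
from itertools import combinations

# ratings: list of dicts, ratings[t][u] = rating user u gives in round t.
# true_values: the feedback score for each round.
# A fixed moderation strategy blocks a set of users and averages the rest;
# we find the best such strategy in hindsight by brute force, which is
# exponential in the number of users.
def best_fixed_strategy(ratings, true_values, users):
    def loss(blocked):
        total = 0.0
        for r, v in zip(ratings, true_values):
            kept = [r[u] for u in users if u not in blocked]
            pred = sum(kept) / len(kept) if kept else 0.0
            total += (pred - v) ** 2
        return total
    # all proper subsets of users (always keep at least one user unblocked)
    subsets = [frozenset(c) for k in range(len(users))
               for c in combinations(users, k)]
    return min(subsets, key=loss)
```

With one spammer who consistently over-rates, the best strategy in hindsight is simply to block that spammer; the point of the guarantee is that the online system must approach this hindsight-optimal score without knowing in advance who the spammer is.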
Right now I have a protocol which I believe converges quite quickly (you only get a polylogarithmic number of “bad” recommendations before convergence), but which is computationally completely infeasible (it takes an exponential amount of time to produce a recommendation, where I would like the amount of time to be significantly sub-linear in the number of users). I’m optimistic about reducing the computation time (for example, I have a linear time system which probably works in practice; the question is whether there is any way for an adversary to game the approximations I have to use).
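For context, the standard way to compete with the best fixed strategy is a multiplicative-weights (“experts”) update, sketched below. To be clear, this is the textbook framing, not the author’s protocol; it also illustrates the computational obstacle, since with n users there are 2^n moderation strategies, and maintaining an explicit weight per strategy is exponential.

```python
import math

# Multiplicative-weights sketch: each fixed moderation strategy is an
# "expert". Regret against the best strategy grows like sqrt(T * log N)
# for N strategies -- but N is exponential in the number of users, so
# this naive version is computationally infeasible at scale.
def multiplicative_weights(strategies, losses_per_round, eta=0.1):
    weights = {s: 1.0 for s in strategies}
    total_loss = 0.0
    for losses in losses_per_round:  # losses: dict strategy -> loss in [0, 1]
        z = sum(weights.values())
        # expected loss of recommending according to the current weights
        total_loss += sum(weights[s] * losses[s] for s in strategies) / z
        # down-weight strategies that performed badly this round
        for s in strategies:
            weights[s] *= math.exp(-eta * losses[s])
    return total_loss
```

Run against one always-good and one always-bad strategy for 100 rounds, the learner’s cumulative loss stays bounded by a small constant rather than growing with the number of rounds.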
If you want another pair of eyeballs, I’ll have a look.