The first is a point we think a lot about. What is the correlation between what people upvote and what they trust? How does that change when the mechanism changes? And how do you properly signal what it is you trust? And how should that transfer over to other things? Hopefully, the mechanism can be kept simple—but there are ways to tweak it and to introduce more nuance, if that turns out to make it more powerful for users.
On the second point, I’m not sure gaming something like EigenKarma would in most cases be a bad thing. If you want to game the trust graph in such a way that I trust you more—then you have to do things that are trustworthy and valuable, as judged by me or whoever you are trying to game. There is a risk of course that you would try to fool me into trusting you and then exploit me—but I’m not sure EigenKarma significantly increases the risk of that, nor do I have the imagination to figure out what it would mean in practice on the forum here for example.
I am curious about what has (presumably) led you to discount the “obvious” solution to the first problem, which is this: when a user upvotes a post, they also invest a tiny amount of trust in everyone else who upvoted that same post*. Then, if someone who never posts likes all the same things you do, you will tend to see the other things they like.
* In detail, I would make the time-ordering matter: a spam bot upvoting an already-popular post does not gain trust from all the previous upvoters. To game the system, the spam bot would need to accurately predict that a post will become wildly popular in the future.
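To make that concrete, here is a toy sketch of the co-upvote rule with time-ordering (Python; the data structures and the size of the trust increment are made up for illustration, not taken from EigenKarma):

```python
from collections import defaultdict

# Toy sketch: when a user upvotes a post, they invest a small amount of
# trust in everyone who upvoted that post *before* them. Trust therefore
# flows backwards in time, so a late upvoter on an already-popular post
# receives nothing from the earlier crowd.

CO_UPVOTE_TRUST = 0.01  # arbitrary illustrative increment

trust = defaultdict(lambda: defaultdict(float))  # trust[a][b]: a's trust in b
upvoters = defaultdict(list)                     # upvoters[post]: time-ordered

def record_upvote(post_id, user):
    """Register an upvote and invest a little trust in the earlier upvoters."""
    for earlier in upvoters[post_id]:
        trust[user][earlier] += CO_UPVOTE_TRUST
    upvoters[post_id].append(user)

# A bot that upvotes a post only after it is already popular gains nothing:
record_upvote("p1", "alice")
record_upvote("p1", "bob")
record_upvote("p1", "late_bot")
received = sum(t.get("late_bot", 0.0) for t in trust.values())
print(received)  # 0.0 -- nobody has invested trust in the late bot
```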
I think this doesn’t work even with time-ordering. A spam bot will probably get to the post first in any case, and under time-ordering the earliest upvoters are exactly the ones who collect trust from everyone who upvotes later. A bot that simply upvotes everything will gain a huge amount of trust. Even a bot paid only to upvote specific posts will still gain trust if some of those posts are actually good, which it can then “use” to lend credibility to its upvotes on the rest of the posts (which may not be good).
You probably also want to do some kind of normalization here based on how many posts the user has upvoted in total (so you can’t just, e.g., upvote everything).
(You probably actually care about something a little different from the accuracy of their upvotes-as-predictions on average, though...)
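One way to read this normalization (a hypothetical sketch, not a worked-out proposal): weight the trust you place in another user by the fraction of their total upvotes that you also upvoted, rather than by a flat per-co-upvote increment.

```python
# Hypothetical normalization: instead of a flat increment per co-upvote,
# weight the trust you place in another user by what fraction of *their*
# upvotes you also upvoted. A bot that upvotes everything then scores no
# better than the base rate of posts you happen to like.

def normalized_trust(my_upvotes: set, their_upvotes: set) -> float:
    """Fraction of their upvotes that I also upvoted (0.0 if they have none)."""
    if not their_upvotes:
        return 0.0
    return len(my_upvotes & their_upvotes) / len(their_upvotes)

# A selective user who overlaps with me on most of their upvotes beats a
# bot that upvoted every post on the site:
mine = {"p1", "p2", "p3"}
selective = {"p1", "p2"}                     # 2/2 overlap -> 1.0
bot = {"p1", "p2", "p3", "p4", "p5", "p6"}   # 3/6 overlap -> 0.5
print(normalized_trust(mine, selective), normalized_trust(mine, bot))
```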
On the second point, I’m not sure gaming something like EigenKarma would in most cases be a bad thing. If you want to game the trust graph in such a way that I trust you more—then you have to do things that are trustworthy and valuable, as judged by me or whoever you are trying to game.
I think that even people you trust are susceptible to being gamed. I’m not sure whether the degree of susceptibility matters, though. For example, Reddit is easier to game than LessWrong; LessWrong is gameable to some extent; but is LessWrong gameable to an important extent?
There’s an algorithm called EigenTrust++ that includes both similarity and transitivity in the calculation of one’s reputation score:
https://www.researchgate.net/publication/261093756_EigenTrust_Attack_Resilient_Trust_Management
This is a feature I would be excited to see implemented!
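For anyone curious what the transitive part looks like, here is a rough EigenTrust-style power iteration (Python/NumPy). The similarity weighting in EigenTrust++ is reduced to an optional element-wise factor here, which is a simplification of mine and not the formula from the paper:

```python
import numpy as np

# Rough sketch of an EigenTrust-style global trust computation.
# C[i, j] is user i's local trust in user j; rows are normalized to sum to 1.
# A damping factor alpha keeps some weight on a pre-trusted distribution.

def global_trust(C, pretrusted, alpha=0.15, similarity=None, iters=50):
    C = np.asarray(C, dtype=float)
    if similarity is not None:
        C = C * np.asarray(similarity, dtype=float)  # crude similarity weighting
    row_sums = C.sum(axis=1, keepdims=True)
    C = np.divide(C, row_sums, out=np.zeros_like(C), where=row_sums > 0)
    p = np.asarray(pretrusted, dtype=float)
    t = p.copy()
    for _ in range(iters):
        t = (1 - alpha) * C.T @ t + alpha * p  # trust flows along incoming edges
    return t

# Tiny example: three users, user 0 is pre-trusted.
C = [[0.0, 0.6, 0.4],
     [0.9, 0.0, 0.1],
     [0.5, 0.5, 0.0]]
p = [1.0, 0.0, 0.0]
print(global_trust(C, p).round(3))
```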