Stated another way:
Normal karma:
Provides positive or negative feedback to OP
Increases the visibility of the upvoted post to all users
EigenKarma:
Provides positive or negative feedback to OP
Increases the visibility of the upvoted post to users who assign you high EigenKarma
Increases the visibility of the upvoted poster’s EigenKarma network to you
So EigenKarma improves your ability to decouple signal-boosting and giving positive feedback.
However, it enforces coupling between giving positive feedback and which posts are most visible to you.
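To make the mechanism concrete, here is a minimal sketch of my mental model of EigenKarma, assuming it works like personalized trust propagation (in the spirit of EigenTrust or personalized PageRank) over the upvote graph. The function name and details are my own illustration, not the actual implementation:

```python
from collections import defaultdict

def eigenkarma_scores(upvotes, you, damping=0.85, iters=50):
    """Toy personalized trust propagation over an upvote graph.

    upvotes: {voter: {target: upvote_count}}.
    Returns trust scores from `you`'s point of view: trust flows from
    you to the people you upvote, then onward to the people *they* upvote.
    """
    # Normalize each voter's outgoing upvotes into weights that sum to 1.
    weights = {}
    for voter, targets in upvotes.items():
        total = sum(targets.values())
        weights[voter] = {t: n / total for t, n in targets.items()}

    trust = defaultdict(float)
    trust[you] = 1.0
    for _ in range(iters):
        new = defaultdict(float)
        new[you] = 1 - damping  # trust is always re-seeded at you
        for voter, score in trust.items():
            for target, w in weights.get(voter, {}).items():
                new[target] += damping * score * w
        trust = new
    return dict(trust)
```

The property both lists above point at falls out of this: an upvote edits the shared graph, but scores are always computed relative to a particular viewer, so your votes reshape what you (and the people who trust you) see, rather than a single global number.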
I think the key difference between EigenKarma and normal karma is this: normal karma allows you to “inflict” a post’s visibility on other users, while EigenKarma inflicts the visibility of a broad range of posts only on yourself and on those who’ve inflicted the results of your voting choices upon themselves.
Although the latter seems at least superficially preferable from a standpoint of incentivizing responsible voting, it also results in a potentially problematic lack of transparency if there’s not a strong enough correlation between what people upvote and what people post. Perhaps many people who write good posts you’d like to see more of also upvote a lot of dumb memes. That makes it hard to increase the visibility of good posts without also increasing the visibility of dumb memes.
I agree with Dagon: it seems better to split “giving positive feedback” from “increasing the visibility of somebody’s feed.” The latter is something I might want to do even for somebody who never posts anything, while the former is something I might want to do for all sorts of reasons that have nothing to do with what I want to view in the future.
Right now, it seems there are ways to implement “increasing visibility of somebody else’s feed.” Many sites let you view what accounts or subforums somebody is following, and to choose to follow them. Sometimes that functionality is buried, not convenient to use, or hard to get feedback from. I could imagine a social media site that is centrally focused on exploring other users’ visibility networks and tinkering with your feed based on that information.
At baseline, though, it seems like you’d need some way for somebody to ultimately say “I like this content and I’d like to see more of it.” But it does seem possible to just have two upvote buttons, one to give positive feedback and the other to increase visibility.
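A toy sketch of what that two-button split could look like as a data model. The names (`thank`, `boost`) are hypothetical, purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    thanks: int = 0  # pure positive feedback, shown to the author only
    boosters: set[str] = field(default_factory=set)  # feeds ranking only

def thank(post: Post) -> None:
    """Button 1: reward the author; has no effect on anyone's feed."""
    post.thanks += 1

def boost(post: Post, voter: str) -> None:
    """Button 2: request more content like this; feeds the visibility
    graph without signaling approval to the author."""
    post.boosters.add(voter)
```

The key design choice is that ranking code reads only `boosters`, while the author sees only `thanks`, so neither signal leaks into the other.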
It is an open question to me how correlated users’ tendency to write good posts (or do other types of valuable work) is with their tendency to signal-boost bad things (like stupid memes). My personal experience is that there is a strong correlation between what people consume and what they produce: if I see someone signal-boost low-quality information, I take that as a sign of unsound epistemic practices, and will generally take care to reduce their visibility. (On Twitter, for example, I would unfollow them.)
There are also ways to make EigenKarma more fine-grained so you can hand out different types of upvotes, which can be used to decouple things. On the dev Discord, we are experimenting with giving upvotes flavors, so you can fine-tune what the thing you upvoted made you trust more about the person (is it their skill as a dev? is it their capacity to do research?). Figuring out the design for this, and whether it is too complicated, is an open question in my mind right now.
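As a rough illustration of the shape we are playing with (a simplified sketch with made-up flavor names, not our actual implementation):

```python
from collections import defaultdict
from enum import Enum

class Flavor(Enum):
    DEV = "skill as a dev"
    RESEARCH = "capacity to do research"

# One trust graph per flavor: an upvote tagged DEV raises trust in the
# person's dev work without touching trust in their research judgment.
trust_graphs = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))

def flavored_upvote(voter: str, target: str, flavor: Flavor) -> None:
    trust_graphs[flavor][voter][target] += 1
```

Each per-flavor graph could then be run through the trust propagation separately, so the same person can end up with, say, high dev-trust and low research-trust from your point of view.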
I agree. I’m uncertain about what it would be like to use it in practice, but I think it’s great that you’re experimenting with new technology for handling this type of issue. If it were convenient to test-drive the feature, especially in an academic research context, where I have the biggest and most important search challenges, I’d be interested to try it out.
This sounds like it could easily end up with the same catastrophic flaw as recsys: most users will want to upvote posts they agree with. This creates self-reinforcing “cliques” where everyone sees only more content from the set of users they already agree with, strengthening their belief that the ground-truth reality is what they want it to be, and so on.
Yeah, this seems like it fundamentally springs from “people don’t always want what’s good for them/society.” Hard to design a system to enforce epistemic rigor on an unwilling user base.