It is an open question to me how correlated a user's writing good posts (or doing other kinds of valuable work) is with their tendency to signal-boost bad things (like stupid memes). My personal experience is that there is a strong correlation between what people consume and what they produce: if I see someone signal-boost low-quality information, I take that as a sign of unsound epistemic practices, and will generally take care to reduce their visibility. (On Twitter, for example, I would unfollow them.)
There are also ways to make EigenKarma more fine-grained, so you can hand out different types of upvotes, which can be used to decouple these things. On the dev Discord, we are experimenting with giving upvotes flavors, so you can fine-tune what the thing you upvoted made you trust more about the person (is it their skill as a dev? their capacity to do research?). Figuring out the design for this, and whether it is too complicated, is an open question in my mind right now.
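For concreteness, here is a toy Python sketch of the kind of decoupling I mean. Everything in it is made up for illustration: the flavor labels and the names `give_upvote` and `trust_scores` are hypothetical, and the simple hop-decay propagation is a stand-in rather than the actual EigenKarma computation.

```python
from collections import defaultdict

# upvotes[flavor][(src, dst)] = how many upvotes of that flavor src gave dst.
# The flavor labels ("dev", "research") are illustrative placeholders.
upvotes = defaultdict(lambda: defaultdict(int))

def give_upvote(flavor, src, dst):
    """Record one flavored upvote from src to dst."""
    upvotes[flavor][(src, dst)] += 1

def trust_scores(flavor, seed, depth=3, decay=0.5):
    """Propagate trust outward from `seed` along one flavor's upvote edges.

    Each hop multiplies trust by `decay`, so closer endorsements count
    more. This is a toy breadth-first propagation, not the real
    EigenKarma algorithm.
    """
    # Build adjacency: who has each user upvoted, weighted by upvote count.
    out_edges = defaultdict(list)
    for (src, dst), count in upvotes[flavor].items():
        out_edges[src].append((dst, count))

    scores = defaultdict(float)
    frontier = [(seed, 1.0)]
    for _ in range(depth):
        next_frontier = []
        for node, weight in frontier:
            edges = out_edges.get(node, [])
            if not edges:
                continue
            total = sum(count for _, count in edges)
            for dst, count in edges:
                # Split this node's decayed trust across its endorsements.
                share = weight * decay * count / total
                scores[dst] += share
                next_frontier.append((dst, share))
        frontier = next_frontier
    return dict(scores)

# Alice trusts Bob as a dev; Bob trusts Carol as a dev.
give_upvote("dev", "alice", "bob")
give_upvote("dev", "bob", "carol")

print(trust_scores("dev", seed="alice"))       # {'bob': 0.5, 'carol': 0.25}
print(trust_scores("research", seed="alice"))  # {} (dev trust doesn't leak)
```

The point is just that each flavor gets its own trust graph, so trusting someone as a dev says nothing about how much you trust their research taste.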
I agree. I'm uncertain about what it would be like to use in practice, but I think it's great that you're experimenting with new technology for handling this type of issue. If it were convenient to test-drive the feature, especially in an academic research context, where I have my biggest and most important search challenges, I'd be interested in trying it out.