I don’t think it’s a problem that people can get karma by posting a bunch? The only reward a user gets for having tons of karma is that their votes are worth a bit more; I don’t know the exact formula, but I don’t expect it to be so egregious that it would be worth farming karma for.
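To make that concrete, here is a minimal Python sketch of what such a formula might plausibly look like. The actual LW weighting isn't public as far as I know, so the logarithmic curve and the specific numbers below are pure assumptions:

```python
import math

def vote_weight(karma: int) -> int:
    """Hypothetical vote weight as a function of total karma.

    Illustrative only: assumes weight grows logarithmically, so even
    a huge karma total yields only a modest boost.
    """
    return 1 + int(math.log10(1 + max(karma, 0) / 100))

print(vote_weight(0))       # 1
print(vote_weight(1_000))   # 2
print(vote_weight(10_000))  # 3 -- hardly worth farming for
```

Under any curve this flat, grinding out karma buys you very little voting power.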
And it's certainly not the intention of the content-agnostic Less Wrong site that alignment posts be privileged over other content; that's what the Alignment Forum is for.
As I understand it, just as on Reddit, the primary goal of the karma system is content discoverability: highly upvoted posts stay on the frontpage longer and are seen by more people, and highly upvoted comments are sorted above less upvoted ones. An upvote means something like "I like this", "I agree with this", or "I want more people to see this". However, this breaks down when people want to signal appreciation (say, for the courage of speaking out) even though they consider the content itself low quality. In that case, a single voting axis is clearly not enough.
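To make the "one axis is not enough" point concrete, here is a toy two-axis model. This is my own sketch, not LW's actual implementation; the class and field names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    quality_votes: list = field(default_factory=list)    # "more people should see this"
    agreement_votes: list = field(default_factory=list)  # "I agree / disagree"

    @property
    def quality(self):
        return sum(self.quality_votes)

    @property
    def agreement(self):
        return sum(self.agreement_votes)

# A brave but unpopular comment can be sorted highly on the quality
# axis while its agreement score stays negative.
brave = Comment("unpopular but courageous")
brave.quality_votes += [1, 1, 1]     # readers reward speaking out
brave.agreement_votes += [-1, -1]    # while still registering disagreement

safe = Comment("popular restatement")
safe.quality_votes += [1]
safe.agreement_votes += [1, 1, 1]

frontpage = sorted([brave, safe], key=lambda c: c.quality, reverse=True)
print([c.text for c in frontpage])  # the brave comment sorts first
```

Splitting the axes lets readers reward the act of posting without being forced to endorse the content, which is exactly the case the single axis mishandles.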
I understand that vote manipulation via sockpuppets is a big problem on Reddit, but why do you think it is a relevant problem on LW? I'd expect this kind of thing to become an important problem only if LW got orders of magnitude more users.
> The only reward a user gets for having tons of karma is that their votes are worth a bit more
The only formal reward. A number going up is its own reward for most people. This pushes content toward consensus: what people write becomes a Keynesian beauty contest over how they expect others to vote. If you think that Preference Falsification is one of the major issues of our time, this is obviously bad.
> why do you think it is a relevant problem on LW?
I mentioned the Eugene Nier case, where a person did extreme botting to manipulate the scores of people he didn't like, which drove away a bunch of posters. (The second example was redacted for a reason.)