My karma management techniques:
1) If I’m in a thread and someone’s comment is rated equally with mine, and therefore potentially displayed above my comment, I downvote theirs to give my comment more exposure, and keep the downvote in place until their comment would outrank mine even with it. I remove the downvote later, usually upvoting (by then their comment is being voted above mine because it’s good).
2) If I’m debating someone and I want to downvote their comment, I upvote it for a day or so, then later return to downvote it. This gives the impression that two objective observers who read the thread later agreed with me. This works best on long debate threads, because a) if my partner’s comments are getting immediately upvoted, they tend to be encouraged and will continue the debate, further exposing themselves to downvotes, and b) comments deep in long threads get fewer reads, so a single vote up or down makes a much bigger impression when most of the comments in the thread are rarely voted past ±2.
3) Karma is really about rewarding or punishing an author for content, to encourage certain types of content. Comments that are too aggressive will not be upvoted even if people agree with the point, because readers don’t want to reward aggressive behavior. Conversely, comments that are not aggressive enough are given extra karma: the reader’s first instinct is to help promote the message, because the timid author won’t promote it enough on his own. This is nonsensical in this format, but the instinct is preserved.
I’ve noticed that the comments that get voted up the most are those that do probability calculations, those whose authors’ names pop off the page, and those which are cynical on the surface, possibly with a wry humor, while revealing a deep earnestness. If you have something unpopular to say, or are just plain losing an argument, that’s the best tone to take, because people will avoid downvoting if they disagree, but will usually upvote if they do agree.
EDIT: I agree with Alicorn that votes shouldn’t be anonymous, as that would remove the dirtiest of these variably dirty techniques, but in the meantime, play to win.
Upvoted for honesty.
Of course, I’ll be back in a few days to downvote you.
I can’t believe you actually admitted to using these strategies.
It does make me impressed at his cleverness.
Not me. At least for points 1 and 2, these strategies have occurred to me, but they’re, you know, wrong.
As for point 3, I like that we so strongly discourage aggression. I think that aggression and overconfidence of tone are usually big barriers to rational discussion.
(General “you”) Only if you see the partner who is the target of aggression as your equal. If you get the impression that the target is below your status, or deserves to be, you will reward the comment’s aggression with an upvote.
Does that mean you’re not impressed at your own cleverness either? :-)
Since I decided to avoid discussing karma, I’ll keep my thoughts on the rest of your comment to myself. (But you can probably guess what they are.)
I don’t like that you are trying to mislead others.
“Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever.”—The Black Belt Bayesian
The deception you’ve described is of course minor and maybe you don’t lie about important things. But it seems a dangerous strategy, for your own epistemic hygiene, to be casual with the truth. Even if I didn’t regard it as ethically questionable, I wouldn’t be habitually dishonest for the sake of my own mind.
To win what? What is there to win?
The same thing you play Tetris or any other game for. Whatever that is.
Cheat codes make games boring.
If there was no game, we wouldn’t keep score.
Your last paragraph was astute.
I found this shocking:
If I’m debating someone and I want to downvote their comment, I upvote it for a day or so, then later return to downvote it. This gives the impression that two objective observers who read the thread later agreed with me.
I wouldn’t game the system like this, not so much because of moral qualms (playing to win seems OK to me) but because I need straightforward karma information as much as possible in order to evaluate my comments. Psychology and temporal dynamics are surely important, but unless they are held constant (or at least kept ‘natural’), the system would be way too complex for me to continue modeling and learning from.
But in a debate, inasmuch as you’re relying on the community’s consensus to reveal whether you’re right about something, I would prefer to manipulate that input to make it favor me.
I thought about it further, and decided that I would have moral qualms about it. First, you are insincerely up-voting someone, and they are using this as peer information about their rationality. Second, you are encouraging a person C to down-vote them (person B) if C thinks person B’s comment should just be at 0. But then when you down-vote B, their karma goes to −2, which is not what person C intended with his vote (see the short trace below).
So I think this policy is just adding noise to the system, which is not consistent with the LW norm of wanting a high signal-to-noise ratio.
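A short trace of the vote arithmetic in that scenario, as a sketch (the +1/−1 values are the standard vote weights; the labels A, B, and C just follow the example above):

```python
# Trace of the scenario above: manipulator A, target B, bystander C.
score = 0

score += 1   # A's insincere upvote: B's comment is displayed at +1
score -= 1   # C thinks the comment belongs at 0 and downvotes: back to 0

# Later A retracts the upvote and downvotes instead, a swing of -2
# relative to the state C was reacting to.
score -= 1   # retract A's earlier upvote
score -= 1   # add A's downvote

print(score)  # -2, even though C only ever intended the comment to sit at 0
```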
I am insincerely up-voting someone: True.
They are using this as peer information about their rationality: People are crazy, the world is mad. Besides, who really considers the average karma voter their peer?
Encouraging a person C to down-vote them: Also person D, who only upvotes because they see someone else has already upvoted, so they know they won’t upvote alone.
It isn’t crazy or mad to consider people who vote on your comments as on average equal to you in rationality. Quite the opposite: if each of us assumes that we are more rational than those who vote, this will be like everyone thinking that he is above average in driving ability or whatever.
And in fact, many people do use this information: numerous times someone has said something like, “Since my position is against community consensus I think I will have to modify it,” or something along these lines.
Well, certainly not in those terms, but I’ve seen things along the lines of “EDIT: Am I missing something?” on comments that get downvoted (from a user who isn’t used to being downvoted, generally). Those can have a positive effect.
Why are you concerned that you win the debate? I’m sure this sounds naive, but surely your concern should be that the truth win the debate?
If my debate partner is willing to change his mind or stop debating because the community disagrees, I want to know that. I also don’t think a) the community’s karma votes represent some sort of evidence of an argument’s rightness or b) that anyone has a right to such evidence that this tactic denies them.
You could make better arguments for your tactic than the ones you are making.
a) the community’s karma votes represent some sort of evidence of an argument’s rightness
It does. Noisy, biased evidence, but still evidence. If I am downvoted I will review my position, make sure it is correct, and trace out any likely status-related reasons for downvoting, which would give an indication of how much truth value I think the votes contain.
Publicly failing in the quantity necessary to maximize your learning growth is very low-status and not many people have the stomach for it.
But it’s preferable to be wrong.
For who? Quote from my comment:
We have preferences for what we want to experience, and we have preferences for what those preferences are. We prefer to prefer to be wrong, but it’s rare we actually prefer it. Readily admitting you’re wrong is the right decision morally, but practically all it does is incentivize your debate partners to go ad hominem or ignore you.
Well, if I prefer to prefer being wrong, then I plan ahead accordingly, which includes a policy against ridiculous karma games motivated by fleeting emotional reactions.
So my options are:
1) Attempt to manipulate the community into admitting I’m right, or
2) Eat the emotional consequences of being called names and ignored, in exchange for either honest or visibly inappropriate feedback from my debate partners.
I’ll go with 2. Sorry about your insecurities.
Does this count as honest or visibly inappropriate feedback?
I value 1 over 2. Quality of feedback is, as expected, higher in 2, but comes infrequently enough that I estimate 1 wins out over a long period of time by providing less quality at a higher rate.
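As a back-of-the-envelope illustration of that estimate (all four numbers below are invented purely to show the arithmetic of the trade-off, not measurements of anything):

```python
# Crude model: value of a feedback channel = (usefulness per item) x (items per week).
quality_per_item = {"option 1": 0.2, "option 2": 0.9}  # option 2's feedback is better...
items_per_week   = {"option 1": 5.0, "option 2": 0.5}  # ...but arrives far less often

for option in ("option 1", "option 2"):
    value = quality_per_item[option] * items_per_week[option]
    print(option, round(value, 2))

# option 1 -> 1.0, option 2 -> 0.45: with these assumed numbers the frequent,
# low-grade feedback wins out; different numbers would reverse the conclusion.
```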
My last sentence was a deliberate snark, but it’s “honest” in the sense that I’m attempting to communicate something that I couldn’t find a simpler way to say (roughly: that I think you’re placing too much importance on “feeling right”, and that I dismiss that reaction as not being a “legitimate” motivation in this context).
I have no problem making status-tinged statements if I think they’re productive; I’ll let the community be the judge of their appropriateness. There’s definitely a fine line between efficiency and distraction, and I have no delusions of omniscience concerning its location. I’m pretty sure that participation in this community has shaved off a lot of pointless attitude from my approach to online discourse. Feedback is good.
I disagree quantitatively with your specific conclusion concerning quality vs quantity, but I don’t see any structural flaw in your reasoning.
It’s only productive inasmuch as it takes advantage of the halo effect—trying to make your argument look better than it really is. How is that honest?
But how can you have any self-respect, knowing that you prefer to feel right rather than be right? For me, the feeling of being wrong is much less bad than believing I’m so unable to handle being wrong that I’m sabotaging the beliefs of myself and those around me. I would regard myself as pathetic if I made decisions like that.
I upvote it for a day or so, then later return to downvote it. This gives the impression that two objective observers who read the thread later agreed with me.
This strategy can be eliminated by showing a count of both upvotes and downvotes, a change which has been requested for a variety of other reasons. I imagine it solves a lot of problems of anonymity, but it makes Wei Dai’s dilemma worse. It makes downvoting the −1 preferable to upvoting it.
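A minimal sketch of why split counts would expose the tactic, assuming a hypothetical per-comment tally (the VoteTally record and its fields are illustrative, not LessWrong’s actual data model):

```python
from dataclasses import dataclass

@dataclass
class VoteTally:
    """Hypothetical per-comment vote record; not the site's real schema."""
    upvotes: int = 0
    downvotes: int = 0

    @property
    def net(self) -> int:
        return self.upvotes - self.downvotes

# Day 1: the manipulator upvotes the target comment.
day1 = VoteTally(upvotes=1, downvotes=0)

# Day 2: they retract the upvote and downvote instead.
day2 = VoteTally(upvotes=0, downvotes=1)

# Net-only display: +1 -> -1, indistinguishable from two independent downvoters.
print(day1.net, "->", day2.net)

# Split display: the upvote count *decreased*, which fresh downvotes alone cannot
# cause, so the change reads as one voter flipping rather than two later readers
# agreeing with the manipulator.
print((day1.upvotes, day1.downvotes), "->", (day2.upvotes, day2.downvotes))
```

The same information could be surfaced in other ways; the point is only that a single net score throws it away.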
Karma can be (and by your own admission, is) about more than first-order content. Excessively aggressive comments may not themselves contain objectionable content, but they tend to have a deleterious effect on the conversation, which certainly does affect subsequent content.
(General “you”) Only if you see the partner who is the target of aggression as your equal. If you get the impression that the target is below your status, or deserves to be, you will reward the comment’s aggression with an upvote.
Are you speaking descriptively, or normatively? Your “karma is really about” statement led me to believe the latter, but this comment seems to lean toward the former. Could you link to some aggressive comments whose upvotes appear to be driven by status rather than the content they’re replying to?
Descriptively. I’ll dig some up.
Ding! This is a reminder. It’s been 12 days since you promised to dig some up.
I don’t recall ever debating with you, but knowing your strategy could potentially change the course of future debates. The usual ‘karma management’ and the more general ‘Laws of Power’ would suggest that keeping this strategy to yourself is probably wise. Of course, there are exceptions to that strategy too...
I would prefer votes be public, so disseminating my knowledge of how to abuse anonymous scoring makes this more likely.
Good reason.
PS: I would play dirtier if the public karma scores were in place. If I play to win, I play to win.
What I really want to do is destroy you karma-wise. This behavior deserves to be punished severely. But I’m now worried about a chilling effect on others who do this coming forward.
Also, everyone, see poll below.
I want to downvote you for this, because punishing people for telling the truth is a bad thing. On the other hand, you are also telling the truth, so… now I’m confused. ;-)
Er, I was expressing my initial emotional reaction, not advocating a policy. Like I said, I’m worried about the chilling effect.
I didn’t even vote down the original comment! Much less destroy him/her.
If you have ever used one of bgrah’s techniques, or some other karma manipulation technique that you believe would be widely frowned upon here, vote this comment up.
(Since apparently you people think this is a game, you can downvote the comment beneath this so I don’t beat you.)
EDIT: I seriously have to say this? If you don’t like there being a poll, vote down the above comment or the karma balancer below. Don’t just screw up the poll out of spite.
I am considering voting up in order to tilt things in favor of making votes de-anonymized. Ironically, as soon as I do so, it’s true.
If it’s not a game, why punish me? What’s so offensive about me having high karma?
There is nothing offensive about you having high karma. It is offensive that you abused a system that a lot of us rely on for evaluating content and encouraging norms that lead to the truth. Truth-seeking is a communal activity, and undermining the system a community uses to find the truth is something we should punish. It’s similar to learning that you had lied in a comment.
I imagine the vast majority of your karma is not ill-gotten, and I have no problem with you having it.
Anyway, I haven’t voted you down, for precedent-setting reasons.
It’s a game; people take themselves too seriously sometimes. They also think that their moral system is superior to your moral system.
If you have ever suppressed your best judgement on something because you feared the social consequences of not supplicating to the speaker, vote this comment up.
I’m not sure this poll is as anonymous as it should be for maximum accuracy. If votes are ever de-anonymized, someone might swing by and look at this.
Solution: never de-anonymize votes retroactively.
I’ll delete the comment in a couple of weeks, or sooner if karma is de-anonymized.
Karma balancer.