If I’m debating someone and I want to downvote their comment, I upvote it for a day or so, then later return to downvote it. This gives the impression that two objective observers who read the thread later agreed with me.
I found this shocking:
I wouldn’t game the system like this, not so much because of moral qualms (playing to win seems OK to me) but because I need straightforward karma information as much as possible in order to evaluate my comments. Psychology and temporal dynamics are surely important, but unless I hold them constant (or at least ‘natural’), the system would be way too complex for me to continue modeling and learning from.
But in a debate, inasmuch as you’re relying on the community’s consensus to reveal you’re right about something, I would prefer to manipulate that input to make it favor me.
I thought about it further, and decided that I would have moral qualms about it. First, you are insincerely up-voting someone, and they are using this as peer information about their rationality. Second, you are encouraging a person C to down-vote them (person B) if they think person B’s comment should just be at 0. But then when you down-vote B, their karma goes to −2, which person C did not intend to do with his vote.
So I think this policy is just adding noise to the system, which is not consistent with the LW norm of wanting a high signal to noise ratio.
I am insincerely up-voting someone: True.
They are using this as peer information about their rationality: People are crazy, the world is mad. Besides, who really considers the average karma voter their peer?
Encouraging a person C to down-vote them: Also a person D, who only upvotes because they see someone else has already upvoted, so they know they won’t upvote alone.
It isn’t crazy or mad to consider people who vote on your comments as on average equal to you in rationality. Quite the opposite: if each of us assumes that we are more rational than those who vote, this will be like everyone thinking that he is above average in driving ability or whatever.
And in fact, many people do use this information: numerous times someone has said something like, “Since my position is against community consensus I think I will have to modify it,” or something along these lines.
And in fact, many people do use this information: numerous times someone has said something like, “Since my position is against community consensus I think I will have to modify it,” or something along these lines.
Well, certainly not in those terms, but I’ve seen things along the lines of “EDIT: Am I missing something?” on comments that get downvoted (from a user who isn’t used to being downvoted, generally). Those can have a positive effect.
Why are you concerned that you win the debate? I’m sure this sounds naive, but surely your concern should be that the truth win the debate?
If my debate partner is willing to change his mind or stop debating because the community disagrees, I want to know that. I also don’t think a) the community’s karma votes represent some sort of evidence of an argument’s rightness or b) that anyone has a right to such evidence that this tactic denies them.
You could make better arguments for your tactic than the ones you are making.
a) the community’s karma votes represent some sort of evidence of an argument’s rightness
It does. Noisy, biased evidence, but still evidence. If I am downvoted I will review my position, make sure it is correct, and trace out the likely status-related reasons for the downvoting, which would give an indication of how much truth value I think the votes contain.
Publicly failing in the quantity necessary to maximize your learning growth is very low-status and not many people have the stomach for it.
But it’s preferable to be wrong.
For who? Quote from my comment:
We have preferences for what we want to experience, and we have preferences for what those preferences are. We prefer to prefer to be wrong, but it’s rare we actually prefer it. Readily admitting you’re wrong is the right decision morally, but practically all it does is incentivize your debate partners to go ad hominem or ignore you.
We prefer to prefer to be wrong, but it’s rare we actually prefer it.
Well, if I prefer to prefer being wrong, then I plan ahead accordingly, which includes a policy against ridiculous karma games motivated by fleeting emotional reactions.
but practically all it does is incentivize your debate partners to go ad hominem or ignore you
So my options are:
1. Attempt to manipulate the community into admitting I’m right, or
2. Eat the emotional consequences of being called names and ignored, in exchange for either honest or visibly inappropriate feedback from my debate partners.
I’ll go with 2. Sorry about your insecurities.
Does this count as honest or visibly inappropriate feedback?
I value 1 over 2. Quality of feedback is, as expected, higher in 2, but comes infrequently enough that I estimate 1 wins out over a long period of time by providing less quality at a higher rate.
My last sentence was a deliberate snark, but it’s “honest” in the sense that I’m attempting to communicate something that I couldn’t find a simpler way to say (roughly: that I think you’re placing too much importance on “feeling right”, and that I dismiss that reaction as not being a “legitimate” motivation in this context).
I have no problem making status-tinged statements if I think they’re productive—I’ll let the community be the judge of their appropriateness. There’s definitely a fine line between efficiency and distraction, and I have no delusions of omniscience concerning its location. I’m pretty sure that participation in this community has shaved off a lot of pointless attitude from my approach to online discourse. Feedback is good.
I disagree quantitatively with your specific conclusion concerning quality vs quantity, but I don’t see any structural flaw in your reasoning.
It’s only productive inasmuch as it takes advantage of the halo effect—trying to make your argument look better than it really is. How is that honest?
But how can you have any self-respect, knowing that you prefer feeling right to being right? For me, the feeling of being wrong is much less bad than believing I’m so unable to handle being wrong that I’m sabotaging the beliefs of myself and those around me. I would regard myself as pathetic if I made decisions like that.
Your last paragraph was astute.