Downvote spam, but otherwise avoid voting up or down—we’re likely to be voting for biased reasons.
That’s an awesome idea. Maybe amend it to “downvote spam, otherwise vote everything toward 0” so a minority of politically-motivated voters can’t spoil the game for everyone else?
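For concreteness, the amended rule can be sketched as a small function (the name and signature are hypothetical, just to make the norm precise):

```python
def recommend_vote(current_score: int, is_spam: bool) -> int:
    """Return +1 (upvote), -1 (downvote), or 0 (abstain) under the
    'downvote spam, otherwise vote everything toward 0' norm."""
    if is_spam:
        return -1  # spam is always downvoted
    if current_score > 0:
        return -1  # pull positively-scored comments back toward 0
    if current_score < 0:
        return 1   # pull negatively-scored comments back up toward 0
    return 0       # already at 0: abstain
```

Under this rule, no non-spam comment can be driven far from 0 by a voting bloc, since any displacement invites opposing votes.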
In addition to my other comment, I think it will be hard to enforce a voting norm that is so inconsistent with the voting norms on the rest of the site.
Disagree; there are successful instances of using karma in ways inconsistent with the rest of the site.
The most important counterexample here is Will Newsome’s Irrationality Game post, where voting norms were reversed: the weirdest/most irrational beliefs were upvoted the most, and the most sensible/agreeable beliefs were downvoted into invisibility. Many of the comments in that thread, especially the highest-voted, have disclaimers indicating that they operate according to a different voting metric. There is no obvious indication that anyone was confused or malicious with regard to the changed local norm.
Hmm. I like the idea that expressing an idea well is rewarded, which your suggestion doesn’t allow. Trying to figure out how to decide between them.
Hmm. How about:
Spam is not engagement, but the poster whose posting prompted this discussion post was not really interested in a discussion.
Sounds good. Has a side-effect of there being a perceived cost for posting in the thread; you’re more likely to be downvoted.
I generally counsel not downvoting for disagreement anywhere on the site. I think this needs to be stronger.
Mm. I sometimes upvote for things I think are good ideas, as an efficient alternative to a comment saying “Yes, that’s right.” I sometimes downvote for things I think are bad ideas, as an alternative to a comment saying “Nope, that’s wrong.” While I would agree that in the latter case a downvote isn’t as good as a more detailed comment explaining why something is wrong, I do think it’s better than nothing.
So, consider this an opportunity to convince someone to your position on downvotes, if you want to: why ought I change my behavior?
Voting is there to encourage/discourage some kinds of comments. We don’t want people to not make comments just because we disagree with their contents, so we shouldn’t downvote comments for disagreement.
If someone makes a good, well-reasoned comment in favor of a position I disagree with, that merits an upvote and a response.
It might be nice to have a mechanism for voting “agree/disagree” in addition to “high quality / low quality” (as I proposed 3 years ago), but in the absence of such a mechanism we should avoid mixing our signals.
The comments that float to the top should be the highest-quality, not the ones most in line with the LW party line.
And people should be rewarded for making high-quality comments and punished for making low-quality comments, not rewarded for expressing popular opinions and punished for expressing unpopular opinions.
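The "agree/disagree" plus "high quality / low quality" mechanism proposed above could look something like this (purely hypothetical; no such feature exists on the site):

```python
from dataclasses import dataclass

@dataclass
class CommentKarma:
    """Hypothetical two-axis karma: quality and agreement are tracked
    separately, so 'well argued' and 'I agree' stop being conflated
    in a single number."""
    quality: int = 0
    agreement: int = 0

    def vote(self, quality_delta: int = 0, agreement_delta: int = 0) -> None:
        # a single vote may move either axis, or both
        self.quality += quality_delta
        self.agreement += agreement_delta
```

A reader could then cast `vote(quality_delta=+1, agreement_delta=-1)` on a well-reasoned comment they disagree with, and comment sorting could use `quality` alone, keeping popularity out of the ranking.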
I agree that good, well-reasoned comments don’t merit downvotes, even if I disagree with the position they support. I agree that merely unpopular opinions don’t merit downvotes. I agree that low-quality comments in line with the LW party line don’t merit upvotes. I agree that merely popular opinions don’t merit upvotes. I agree that voting is there to encourage and discourage some kinds of comments.
What’s your position on downvoting a neither-spectacularly-well-nor-poorly-written comment expressing an idea that’s simply false?
I don’t think that type of comment should be downvoted except when the author can’t take a hint and continues posting the same false idea repeatedly. Downvoting false ideas won’t prevent well-intentioned people from making mistakes or failing to understand things, mostly it would just discourage them from posting at all to whatever extent they are bothered by the possibility of downvotes.
I agree with User:saturn.
An idea that’s false but “spectacularly well-written” should be downvoted to the extent of its destructiveness. Stupidity (the tendency toward unwitting self-destruction) is what we’re trying to avoid here, right? We’re trying to avoid losing. Willful ignorance of the truth is an especially damaging form of stupidity.
Two people are unlikely to come to completely different and antithetical viewpoints if both are reasonably intelligent. Thus, the very well-written but false viewpoint is far more damaging than the clearly stupid false viewpoint. If this site helps people avoid damaging their property (their brains, their bodies, their material possessions), or minimizes systemic damage to those things, then it’s more highly functional, and the value is apparent even to casual observers.
Such a value is sure to be adopted and become “market standard.” That seems like the best possible outcome, to me.
So, if a comment is seemingly very well-reasoned, but false, it will actually help to expand irrationality. Moreover, it’s more costly to address the idea, because it “seems legit.” Thus, to not sound like a jerk, you have to expend energy on politeness and form that could normally be spent on addressing substance.
HIV tricks the body into believing it’s harmless by continually changing and “living to fight another day.” If it were a more obvious threat, it would be identified and killed. I’d rather have a sudden flu that makes me clearly sick, but that my body successfully kills, than HIV that allows me to seem fine, but slowly kills me in 10 years. The well-worded but false argument is like a virus that slips past your body’s defenses or neutralizes them. That’s worse than a clearly dangerous poison because it isn’t obviously dangerous.
False ideas are most dangerous when they seem to be true. And such ideas need not seem true to smart people; it’s enough for them to seem true to 51% of voters.
If 51% of voters can’t find fault with a false idea, it can be as damaging as “the state should own and control all property.” Result: millions murdered (and we still dare not talk about it, lest we be accused of being “mind killed” or “rooting for team A to the detriment of team B”—as if avoiding mass murder weren’t enough of a reason for rooting for a properly-identified “right team”).
Now, what if there’s a reasonable disagreement, from people who know different things? Then evidence should be presented, and the final winner should become clear, or a vital area where further study is needed can be identified.
If reality is objective, but humans are highly subjective creatures due to limited brain (neocortex) size, then argument is a good way to make progress toward a Lesswrong site that exhibits emergent intelligence.
I think that’s a good way to use the site. I would prefer to have my interactions with this site lead me to undiscovered truths. If absolutely everyone here believes in the “zero universes” theory, then I’ll watch more “Google tech talks” and read more white papers on the subject, allocating more of my time to comprehending it. If everyone here says it’s a toss-up between that and the multiverse theory, or “NOTA,” I might allocate my time to an entirely different and “more likely to yield results” subject.
In any case, there is an objective reality that all of us share “common ground” with. Thus, false arguments that appear well-reasoned are always poorly reasoned, to some extent. They are always a combination of thousands of variables. Upranking or downranking is a means for indicating which variables we think are more important, and which ones we think are true or false.
The goal should always be an optimal outcome, including an optimal prioritization.
If you have the best recipe ever for a stevia-sweetened milkshake, and your argument is true, valid, good, and I make the milkshake and I think it’s the best thing ever, and it contains other healthy ingredients that I think will help me live longer, then that’s a rational goal. I’m drinking something tasty, and living longer, etc. However, if I downvote a comment because I don’t want Lesswrong to turn into a recipe-posting board, that might be more rational.
What’s the greatest purpose to which a tool can be used? True, I can use my pistol to hammer in nails, but if I do that, and I eventually need a pistol to defend my life, I might not have it, due to years of abuse or “sub-optimal use.” Also, if I survive attacks against me, I can buy a hammer.
A Lesswrong “upvote” contains an approximation of all of that. Truth, utility, optimality, prioritization, importance, relevance to community, etc. Truth is a kind of utility. If we didn’t care about utility, we might discuss purely provincial interests. However: Lesswrong is interested in eliminating bad thinking, and it thus makes sense to start with the worst of thinking around which there is the least “wiggle room.”
If I have facial hair (or am gay), Ayn Rand followers might not like me. Ayn Rand often defended capitalism. By choosing to distance herself from people over their facial hair, she failed to prioritize her views rationally, and to perceive how others would shape her views into a cult through their extended lack of proper prioritization. So, in some ways, Rand (like the still worse Reagan) helped to delegitimize capitalism. Still, if you read what she wrote about capitalism, she was 100% right, and if you read what she wrote about facial hair, she was 100% superficial and doltish. So, on an Ayn Rand forum, if someone begins defending Rand’s disapproval of facial hair, I might point out that in 2006 the USA experienced a systemic shock to its fiat currency system, and try to direct the conversation to more important matters.
I might also suggest leaving the discussions of facial hair to Western wear discussion boards.
It’s vital to ALWAYS include an indication of how important a subject is. That’s how marketplaces of ideas focus their trading.
An idea that’s false but “spectacularly well-written” should be downvoted to the extent of its destructiveness.
Well, to the extent of its net destructiveness… that is, the difference between the destructiveness of the idea as it manifests in the specific comment, and the destructiveness of downvoting it.
But with that caveat, sure, I expect that’s true.
That said, the net destructiveness of most of the false ideas I see here is pretty low, so this isn’t a rule that is often relevant to my voting behavior. Other considerations generally swamp it.
That said, I have to admit I did not read this comment all the way through. Were it not a response to me, which I make a habit of not voting on, I would have downvoted it for its incoherent wall-of-text nature.
I think the norm is pretty strong. I tend to downvote for stupid, not just wrong. But it will need to be explicitly reinforced.
Edit: The norm on the site is also different if you are participating in the conversation (try not to downvote at all) or simply observing.
To call “don’t downvote if I’m in the conversation” a local norm might be overstating the case. I’ve heard several people assert this about their own behavior, and there are good reasons for it (and equally good reasons for not upvoting if I’m in the conversation), but my own position is more “distrust the impulse to vote on something I’m emotionally engaged with.”
I like that, and I think I’ll use something like that in the guidelines.
To echo Alejandro1, downvotes should also go to comments which break the rules.
“I am free, no matter what rules surround me. If I find them tolerable, I tolerate them; if I find them too obnoxious, I break them. I am free because I know that I alone am morally responsible for everything I do.”
― Robert A. Heinlein
(There’s no way to break the rule on posting too fast. That’s one I’d break. Because yeah, we ought not to be able to come close to thinking as fast as our hands can type. What a shame that would be. …Or can a well-filtered internet forum—which prides itself on being well-filtered—have “too much information”?)
Downvoted for fallacy of gray, and because I’m feeling ornery today.
There’s no fallacy of gray in there. Since votes count just as much in the thread, and our votes will be much more noisy, it would often be best to refrain from voting there. If anything, I might have expected to be accused of the opposite fallacy.
This qualification makes it not the fallacy of gray. If that qualifier was implicit from context above, I simply missed it.
I still don’t see how that would relate to the fallacy of gray: