I like the idea of a political theory thread, but before I do it, I think it’s worthwhile to think about some ground rules in order for it to be productive.
Arguments still aren’t soldiers. Being mindkilled is still bad.
Read posts charitably, even if you intend to steelman
Don’t say “Your position requires you to kick puppies” unless you genuinely believe the poster is unaware of that fact.
What happens in Political Theory Open Thread stays in Political Theory Open Thread. Edit: In short, beware the halo effect.
Any other points I should add (particularly about voting/karma)?
Edit:
Distrust your impulse to vote on something. Particularly if you are emotionally engaged. Politics is the mindkiller.
Extreme contrarianism for its own sake is probably not valuable.
“Arguments are soldiers” is practically the definition of democracy. In theory, if my arguments are persuasive enough it will determine whether or not my neighbors or I can continue doing X or start doing Y without being fined, jailed, or killed for it. Depending on what great things I like to do or what horrible things I want to prevent my neighbors from doing, that’s an awfully powerful incentive for me to risk a few minds being killed.
Now, in practice we mostly live in near-megaperson cities in multi-megaperson districts of near-gigaperson countries, whereas my above theory mostly applies to hectoperson and kiloperson tribes. But my ape brain can’t quite internalize that, so the subconscious incentive remains.
But that’s not even the worst of it! I try to read a range of liberal, conservative, libertarian, populist etc. news and commentary, just so that the gaps in each don’t overlap so much… but it requires a conscious effort. Judging by the groupthink in reader comments on these sites, most people’s behavior is the opposite of mine. Why not? Reading about how right you are is fun; reading about how wrong you are is not.
It would be very easy for new would-be LessWrong readers to see the politics threads, jump to conclusions like “Oh, these people think they’re so smart but they’re actually a bunch of Blues! A wise Green like me should look elsewhere for rationality.” Repeat for a few years and the average LessWrong biases really do start to skew Blue, even bad Blue-associated ideas start going unchallenged, etc.
I think I would still love to read what LessWrong users have to say about politics. Probably on a different site. With unconnected karma and preferably unconnected pseudonyms.
“Arguments are soldiers” is practically the definition of democracy.
Respectfully, that’s not a correct use of the metaphor. The point is that unwillingness to disagree with other positions simply because those positions reach the desired conclusion is evidence of being mindkilled. You don’t shoot soldiers on your side, but for those thinking rationally, arguments are not soldiers, so bad ideas should always be challenged.
It would be very easy for new would-be LessWrong readers to see the politics threads, jump to conclusions like “Oh, these people think they’re so smart but they’re actually a bunch of Blues! A wise Green like me should look elsewhere for rationality.” Repeat for a few years and the average LessWrong biases really do start to skew Blue, even bad Blue-associated ideas start going unchallenged, etc.
This is a real risk, but it’s worth assessing (and figuring out how to assess) how likely it is to occur.
By “thinking rationally”, you must mean epistemically, not instrumentally.
If (to use as Less-Wrong-politically-neutral an allegory as I can) you are vastly outnumbered by citizens who are wondering if maybe those birds were an omen telling us that Jupiter doesn’t want heretics thrown to the lions anymore, I agree that the epistemically rational thing to do is point out that we don’t have much evidence for the efficacy of augury or the existence of Zeus, but the instrumentally rational thing to do is to smile, nod, and point out that eagles are well-known to convey the most urgent of omens. In more poetic words: you don’t shoot soldiers on your side.
The metaphor seems to be as correct as any mere metaphor can get. Is it such a stretch to call an argument a “soldier” for you when it’s responsible for helping defend your life, liberty, or property?
First, that’s not the metaphor we were discussing. Second, the metaphor you are using allows arguments to be soldiers of any ideology, not simply democracy.
I have read “Politics is the mindkiller” and am discussing the same metaphor. For that matter, I’m practically recapitulating the same metaphor, to make an even stronger point: not only can politics provoke irrational impulses to support poor arguments on your “side”, politics can create instrumentally rational incentives to (publicly, visibly, not internally) support poor arguments. Sometimes you support a morally dubious soldier because of jingoism, sometimes you support him because he’s the best defense in between you and an even worse soldier.
Would you be more specific about how you think my use of the metaphor is different and/or invalid?
I do think I’ve given a compelling counterexample to “bad ideas should always be [publicly] challenged”. (my apologies if the implicit [publicly] here was not your intended claim, but the context is that of a proposed public discussion) Have you changed your mind about that claim, or do you see a problem with my reasoning? For that matter, in my hypothetical political forum would you be arguing for atheism or for more compassionate augury yourself?
The preposition of your second sentence suggests a miscommunication of my initial claim. I didn’t intend to say “arguments are soldiers of democracy”, but rather “arguments are soldiers in a democracy”. You’re still right that this also applies to non-democracies: in any state where public opinion affects political policy, incentives exist to try and steer opinion towards instrumentally rational ends even if this is done via epistemically irrational means. Unlimited democracy is just an abstract maximum of this effect, not the only case where it applies.
In brief, I think my interpretation is right because it is consistent with the intended lesson, which is “Don’t talk about Politics on LessWrong.” In other words, I understood the point of the story to be that treating arguments as soldiers interferes with believing true things.
I agree that “bad ideas should be publicly challenged” is only true if what I’m trying to do is believe true theories and not believe false theories. If I’m trying to change society (i.e. do politics), I shouldn’t antagonize my allies. The risk is that I will go from disingenuously defending my allies’ wrong claims to sincerely believing my allies’ wrong claims, even in the face of the evidence. That’s being mindkilled. In short, engaging in the coalition-building necessary to do politics is claimed to cause belief in empirically false things. I.e. “Politics is the Mindkiller.”
My interpretation could be summarized in similar fashion as “really, really, don’t talk about politics on LessWrong”—whether this is “consistent” or not depends on your definition of that word.
I agree with your interpretation of the point of the story… and with pretty much everything else you wrote in this comment, which I guess leaves me with little else to say.
Although, that’s an example of another issue with political forums, isn’t it? In an academic setting, if a speaker elicits informed agreement from the audience about their subject, that means we’ve all got more shared foundational material with which to build the discussion of a closely related subsequent topic. Difficult questions without obvious unanimous answers do get reached eventually, but only after enough simpler related problems have been solved to make the hard questions tractable.
Politics instead turns into debates, where discussions shut down once agreement occurs, then derail onto the less tractable topics where disagreement is most heated. Where would we be if Newton had decided “Yeah, Kepler’s laws seem accurate; let me just write “me too” and then we’re on to weather prediction!”
In short, engaging in the coalition-building necessary to do politics is claimed to cause belief in empirically false things. I.e. “Politics is the Mindkiller.”
To me, this just shows that a ban on political argumentation is the very last thing that Lesswrong needs. The accusation of being “mind-killed” is levied by those whose minds are too emotionally dysfunctional for them to tell the difference between abolition and slave ownership (after all, one is blue and the other is green, and there couldn’t very well be an objective reason for either side holding their position, could there?).
The ability to stifle debate with an ad hominem and a karmic downgrade is the mark of a totalitarian (objectively unintelligent) forum, not a democratic (more intelligent than totalitarian) one. Now a libertarian and democratic forum with smart filters? That’s smarter still. In hindsight, everyone agrees so, but in present scenarios, many people are corrupted or uneducated, and lack comprehension.
This is one of the primary reason posts are labeled as mind-killed—because those posts are actually higher-level comprehension and what people don’t understand, they often attempt to destroy (especially where force is involved—people generally hate to be held accountable for possessing evil beliefs, and politics is the domain of force).
Every argument I make, no matter how seemingly mind-killed it is, is always able to be defended by direct appeal to the evidence. Many people don’t understand the evidence, though, or they deny it. Evidence that places sociopaths and their conformists on the wrong side of morality will always be fought, tooth and nail. To test this out, tell your entire family that they’re all thieves, no better than the Nazis who watched train cars of Jews go by in the distance, at your next Thanksgiving meal. (Don’t actually try this. LOL.)
Still, at some point, there was a family gathering prior to Nazi Germany, where all hope hadn’t been lost, and someone told their family that they should all buy rifles and join the resistance. That person was right. He was reported, sent to prison, and murdered by the prevailing “consensus view.” …So the Warsaw Jews had to figure it out later, and resist with a far smaller chance of success.
As John Ross wrote in “Unintended Consequences” if you wait to stand up for what’s right until you’re 98 pounds and being herded onto a cattle car with only the clothes on your back, it’s too late for you to have a chance at winning. You need to deploy soldiers when you’ll be hooted down for deploying soldiers. And, you need to be certain you’re in the right, while deploying soldiers.
The best thing possible is to make sure that your soldiers are defending something defensible at its core. The best way to do this is to quickly show that such soldiers are not in the wrong, and clearly aren’t in the wrong. If you’re defending Democrats, Republicans, most Libertarians, Greens, or Constitution Party candidates, you have a difficult row to hoe if this is your goal.
Far less difficult is an issue-based stance, and philosophical stance, on any given political subject. So yes, soldiers can be deployed, and here at LW, one would ideally wish to distance oneself from identification with bad arguments or poor defenses of an idea. …So just refrain from up-voting it. Not difficult.
Of course, once someone is tarred with “bad Karma” that’s a scarlet letter that prevents anything useful from that account from ever being considered—an ad hominem attack on all ideas from that account, no matter how valid they are.
If you cannot speak without insulting your audience, you probably aren’t going to convince anyone.
This comment would be much better, therefore, without the insults — the “emotional dysfunction”, the “totalitarian (objectively unintelligent)”, the “corrupted or uneducated”, the “sociopaths and their conformists”, and so on, and so on, ad nauseam.
Still, at some point, there was a family gathering prior to Nazi Germany, where all hope hadn’t been lost, and someone told their family that they should all buy rifles and join the resistance. That person was right.
No, he was wrong. The right thing to buy was tickets overseas.
You need to deploy soldiers when you’ll be hooted down for deploying soldiers. And, you need to be certain you’re in the right, while deploying soldiers.
I see a certain… tension between these two sentences.
Are some ideologies more objectively correct than others? (Abolitionists used ostracism and violence to prevail against those who would return fugitive slaves south. Up until the point of violence, many of their arguments were “soldiers.” One such “soldier” was Spooner’s “The Unconstitutionality of Slavery”—from the same man who later wrote “the Constitution of No Authority.” He personally believed that the Constitution had no authority, but since it was revered by many conformists, he used a reference to it to show them that they should alter their position to support of abolitionism. Good for him!)
If some ideologies are more correct than others, then those arguments which are actually soldiers for those ideologies have strategic utility, but only as strategic “talking points,” “soldiers,” or “sticky” memes. Then, everyone who agrees with using those soldiers can identify them as such (strategy), and decide whether it’s a good strategic or philosophical, argument, or both, or neither.
You seem to have excluded a middle option, namely “I am in favor of heretics not being thrown to the lions, and no amount of bird-related omen interpretation will sway my opinion on the subject one way or another.”
Here on Lesswrong, I’d favor such an argument. However, What happens when you look at a giant crowd of people with their bird masks on, and all of them are looking at you for an answer, and they’re about to throw the heretic to the lions, because they lack moral consciences of their own? It’s hard to argue against a “dishonest” strategic argument that still allows the heretic to live, when logic is out-gunned. Even so, I think that such a thing could be stated here, especially with an alias, in case you’re called for jury duty in the future and want to “Survive Voir Dire.”
...This is an old political question. There are a lot of people who were forced to answer it in times when right and wrong suddenly came into clear focus because it became “life or death.” Anne Frank is hiding in the attic: you have to be “dishonest” to the Nazis who are looking for her. In that case, dishonesty is not only “legitimate” it’s the ONLY moral course of action. If you tell the truth to the Nazis, you are then morally reprehensible. You are morally reprehensible if you don’t even lie convincingly.
Here’s another example where the status quo is morally wrong, and (narrow, short-term, non-systemic, low-hierarchical-level) dishonesty is the only morally acceptable pathway: A fugitive slave has escaped, and is being pursued by Southern bounty-hunters and also Northern judges, cops, and prosecutors. He can be forcibly returned on your ex parte testimony, and you’ll even get reward money. Yet, if you don’t make up a lie, you’re an immoral part of a system of slavery, and an intellectual coward.
Here’s an example that is less clear to the bootlickers and tyrant-conformists among the Lesswrong crowd: You’re called for jury duty. The judge is trying to stack the jury full of people who will agree to “apply the law as he gives it to them.” Since the other veniremen are simpletons who have no curiosity about the system they live under that goes beyond the platitudes they learned in their government school, the judge is likely to succeed. You however, are an adherent to the philosophy of Eliezer Yudkowsky, and you have read about jury rights on a severely down-ranked “mind-killed” comment at Lesswrong. You know the defendant’s only hope is someone who knows that the judge is legally allowed to lie to the jury, the same way the police are, by bad Supreme Court precedent. You know that the victimless crime defendant’s only hope is an independent thinker who will get seated on the jury and then refuse to convict. The defendant will be sent to a rape room for 20 years, to have his young life stolen from him, and have his hopes and dreams destroyed, if you fail to answer the “voir dire” questions like the other conformists, and fail to get seated. So, you get seated, and then, knowing that you are superior in power to the judge once seated, you vote to acquit, exercising your right to nullify the evil laws the defendant is charged with breaking.
All three of the prior lessons reference the same principle: lying to an illegitimate system is proper and moral. Yet, the powerful status quo derides this course of action as “immoral.” Thus, it is the domain of proper philosophy to address the issue, and provide guidance to those who lack emotional intelligence.
(Ideally, if Lesswrongers are actually “less wrong” about a subject, the uprank and downrank features could begin to indicate real political intelligence, or how closely one’s argument mirrors reality, or a viable philosophical position. --regardless of whether advocates in one direction or another are “mind-killed” or not. Even a “brain-dead” or “mind-killed” person can say 2+2=4. So maybe that truth doesn’t get upvoted, because it’s obvious. It’s still true. And in times of universal deceit, telling the truth is a revolutionary act.)
Ultimately, political arguments decide policy. Policy will then decide which innocents will live or die, and whether those innocents will be killed by you, or defended by you.
That’s what politics is. That most people lack any kind of a political philosophy and simply “root for their color” is a tangential aside that has now superseded the legitimacy of the debate.
I prefer to have arguments act as soldiers, because that’s still preferable to actual soldiers acting as soldiers. That’s still debate. We’re all adults here. My feelings won’t be hurt when this is downvoted into oblivion and I need to create another profile in order to down-vote somebody’s stupid (unwittingly self-destructive) comment.
Which, by the way, should be the criterion for judging all political arguments: what is the predicted outcome? What is the utility? What is the moral course of action based on a common moral standard? How do the good guys win?
Good guys: abolitionists, allies in WWII, people who sheltered Shin Dong-Hyuk and didn’t report him to the secret police in his Escape from Camp 14, the Warsaw ghetto uprising’s marksmen (not the ones who tried to inform on them, or who counseled putting faith in “god”)
Worthless: The people hooting down debate as “mind-killing,” those who counseled faith in god in the warsaw ghetto, the people who turn anti-government meetings into prayer sessions, those who gave up their friends to avoid being killed by the KGB, etc., those who suggest silencing political debate about ending the drug war because “it’s a downer” (as much of a “downer” as living 14 years or more behind bars like Gary Fannon? --you callous, uncaring pukes!)
Bad guys: the slave owners, the plantation owners who politically opposed abolition, the Nazis, the KGB, the teacher who beat the little girl to death for hiding a few kernels of corn in her pocket inside North Korea’s Camp 14, those who want the drug war to continue because they profit from it, people with a lot of private property who vote for the state to control all private property, etc.
Being dim witted and shutting down debate is not being the opposite of mind-killed. It’s not being philosophical. It’s being brain-dead far worse than being mind-killed—it’s being “inanimate to begin with,” or “still-born.”
That would be a good comeback for those accused of being “mind-killed.” “Tell me how I’m mindlessly taking a side of an irrational argument, or bear the true appellation of ‘still-born’ or philosophically absent, follower, conformist.”
And isn’t that the most damning charge anyway? “Conformist.” Someone who adopts a philosophical position without any reason, simply because there’s safety in numbers, and someone in authority gave the command. Big strong men who don’t dare to defend a logical principle, physically brave, but intellectually weak. …The core of the Nuremburg defense, which was universally ruled illegitimate by western philosophy, law, and civilization.
I suspect that the real fear on this board is that narrow logic divorced from reality is no longer adequate to defend one’s reputation as a thinker.
“Going with the flow” might work in an uncorrupted, civilized regime. Now, show me one! This is really why people don’t want to have to reference reality. Reality implies a bare minimum standard in terms of moral responsibility, and that’s the most terrifying idea known to the majority of men and women, worldwide.
How else do you explain the very low moral standards and corresponding bad results of the majority?
The majority are conformists, guided by power-seeking sociopaths. This isn’t just a fringe theory, it’s a truth referenced by all great political thinkers. To deny this omnipresent truth is to indicate an internal problem with moral comprehension, or basic philosophy.
Those who want to kill or punish anyone should be highly suspect, and a natural question follows: would that be retaliatory force, or “preemptive” force? There is always a path to the truth for those who know how to ask the right questions. Rather than point at someone like Donald Sutherland in “Invasion of the Body Snatchers” while typing out “mind-killed,” perhaps it would make sense learning a little bit of economics, law, and libertarian philosophy, and asking some questions about it to try to see where it’s fundamentally mistaken. The same goes for supporting arguments of a political position.
I’m always willing to tell you why I think I’m right, and offer evidence for it that meets you on your own terms and your own comprehension of reality, and individual facts and evidence within it. I can drill down as far as anyone wishes to go.
What I can’t do is respond in any meaningful way to a crowd of people yelling “mind-killed” as a thrown bottle bounces off my lectern and my mic is turned off. That’s what Karma does to political conversations. It lets those who feel intelligent kill the debate, and kill the emergence of the Lesswrong cybernetic mind.
“Political tags — such as royalist, communist, democrat, populist, fascist, liberal, conservative, and so forth — are never basic criteria. The human race divides politically into those who want people to be controlled and those who have no such desire. The former are idealists acting from highest motives for the greatest good of the greatest number. The latter are surly curmudgeons, suspicious and lacking in altruism. But they are more comfortable neighbors than the other sort.”
― Robert A. Heinlein
“Delusions are often functional. A mother’s opinions about her children’s beauty, intelligence, goodness, et cetera ad nauseam, keep her from drowning them at birth.”
― Robert A. Heinlein, Time Enough for Love
“Goodness without wisdom always accomplishes evil.”
Seconded on the different site, unconnected karma and unconnected pseudonyms. Also, it’d be nice if it could somehow be somewhat dissociated from LW… might be useful to have a link to it easily visible, actually, but if there is one it should be right next to a specification explaining the idea and linking to “politics is the mind-killer”.
Separately, the idea of retaining a taboo on things like discussing politicians or the like, and restricting it to mostly issue discussions, also sounds useful.
That’s an awesome idea. Maybe amend it to “downvote spam, otherwise vote everything toward 0” so a minority of politically-motivated voters can’t spoil the game for everyone else?
In addition to my other comment, I think it will be hard to enforce a voting norm that is so inconsistent with the voting norms on the rest of the site.
Disagree, there are successful instances of using karma in ways inconsistent with the rest of the site.
The most important counterexample here is Will Newsome’s Irrationality Game post, where voting norms were reversed: the weirdest/most irrational beliefs were upvoted the most, and the most sensible/agreeable beliefs were downvoted into invisibility. Many of the comments in that thread, especially the highest-voted, have disclaimers indicating that they operate according to a different voting metric. There is no obvious indication that anyone was confused or malicious with regard to the changed local norm.
Mm. I sometimes upvote for things I think are good ideas, as an efficient alternative to a comment saying “Yes, that’s right.” I sometimes downvote for things I think are bad ideas, as an alternative to a comment saying “Nope, that’s wrong.” While I would agree that in the latter case a downvote isn’t as good as a more detailed comment explaining why something is wrong, I do think it’s better than nothing.
So, consider this an opportunity to convince someone to your position on downvotes, if you want to: why ought I change my behavior?
Voting is there to encourage/discourage some kinds of comments. We don’t want people to not make comments just because we disagree with their contents, so we shouldn’t downvote comments for disagreement.
If someone makes a good, well-reasoned comment in favor of a position I disagree with, that merits an upvote and a response.
It might be nice to have a mechanism for voting “agree/disagree” in addition to “high quality / low quality” (as I proposed 3 years ago), but in the absence of such a mechanism we should avoid mixing our signals.
The comments that float to the top should be the highest-quality, not the ones most in line with the Lw party line.
And people should be rewarded for making high-quality comments and punished for making low-quality comments, not rewarded for expressing popular opinions and punished for expressing unpopular opinions.
I agree that good, well-reasoned comments don’t merit downvotes, even if I disagree with the position they support. I agree that merely unpopular opinions don’t merit downvotes. I agree that low-quality comments in line with the LW party line don’t merit upvotes. I agree that merely popular opinions don’t merit upvotes. I agree that voting is there to encourage and discourage some kinds of comments.
What’s your position on downvoting a neither-spectacularly-well-or-poorly-written comment expressing an idea that’s simply false?
I don’t think that type of comment should be downvoted except when the author can’t take a hint and continues posting the same false idea repeatedly. Downvoting false ideas won’t prevent well-intentioned people from making mistakes or failing to understand things, mostly it would just discourage them from posting at all to whatever extent they are bothered by the possibility of downvotes.
An idea that’s false but “spectacularly well-written” should be downvoted to the extent of its destructiveness. Stupidity (the tendency toward unwitting self-destruction) is what we’re trying to avoid here, right? We’re trying to avoid losing. Willful ignorance of the truth is an especially damaging form of stupidity.
Two highly intelligent people will not likely come to a completely different and antithetical viewpoint if both are reasonably intelligent. Thus, the very well-written but false viewpoint is far more damaging than the clearly stupid false viewpoint. If this site helps people avoid damaging their property (their brain, their bodies, their material possessions), or minimizes systemic damage to those things, then it’s more highly functional, and the value is apparent even to casual observers.
Such a value is sure to be adopted and become “market standard.” That seems like the best possible outcome, to me.
So, if a comment is seemingly very well-reasoned, but false, it will actually help to expand irrationality. Moreover, it’s more costly to address the idea, because it “seems legit.” Thus, to not sound like a jerk, you have to expend energy on politeness and form that could normally be spent on addressing substance.
HIV tricks the body into believing it’s harmless by continually changing and “living to fight another day.” If it was a more obvious threat, it would be identified and killed. I’d rather have a sudden flu that makes me clearly sick, but that my body successfully kills, than HIV that allows me to seem fine, but slowly kills me in 10 years. The well-worded but false argument is like a virus that slips past your body’s defenses or neutralizes them. That’s worse than a clearly dangerous poison because it isn’t obviously dangerous.
False ideas are most dangerous when they seem to be true. Moreover, such ideas won’t seem to be true to smart people. It’s enough for them to seem true to 51% of voters.
If 51% of voters can’t find fault with a false idea, it can be as damaging as “the state should own and control all property.” Result: millions murdered (and we still dare not talk about it, lest we be accused of being “mind killed” or “rooting for team A to the detriment of team B”—as if avoiding mass murder weren’t enough of a reason for rooting for a properly-identified “right team”).
Now, what if there’s a reasonable disagreement, from people who know differen things? Then evidence should be presented, and the final winner should become clear, or a vital area where further study is needed can be identified.
If reality is objective, but humans are highly subjective creatures due to limited brain (neocortex) size, then argument is a good way to make progress toward a Lesswrong site that exhibits emergent intelligence.
I think that’s a good way to use the site. I would prefer to have my interactions with this site lead me to undiscovered truths. If absolutely everyone here believes in the “zero universes” theory, then I’ll watch more “Google tech talks” and read more white papers on the subject, allocating more of my time to comprehending it. If everyone here says it’s a toss-up between that and the multiverse theory, or “NOTA.,” I might allocate my time to an entirely different and “more likely to yield results” subject.
In any case, there is an objective reality that all of us share “common ground” with. Thus, false arguments that appear well reasoned are always poorly-reasoned, to some extent. They are always a combination of thousands of variables. Upranking or downranking is a means for indicating which variables we think are more important, and which ones we think are true or false.
The goal should always be an optimal outcome, including an optimal prioritization.
If you have the best recipe ever for a stevia-sweetened milkshake, and your argument is true, valid, good, and I make the milkshake and I think it’s the best thing ever, and it contains other healthy ingredients that I think will help me live longer, then that’s a rational goal. I’m drinking something tasty, and living longer, etc. However, if I downvote a comment because I don’t want Lesswrong to turn into a recipe-posting board, that might be more rational.
What’s the greatest purpose to which a tool can be used? True, I can use my pistol to hammer in nails, but if I do that, and I eventually need a pistol to defend my life, I might not have it, due to years of abuse or “sub-optimal use.” Also, if I survive attacks against me, I can buy a hammer.
A Lesswrong “upvote” contains an approximation of all of that. Truth, utility, optimality, prioritization, importance, relevance to community, etc. Truth is a kind of utility. If we didn’t care about utility, we might discuss purely provincial interests. However: Lesswrong is interested in eliminating bad thinking, and it thus makes sense to start with the worst of thinking around which there is the least “wiggle room.”
If I have facial hair (or am gay), Ayn Rand followers might not like me. Ayn Rand often defended capitalism. By choosing to distance herself from people over their facial hair, she failed to prioritize her views rationally, and to perceive how others would shape her views into a cult through their extended lack of proper prioritization. So, in some ways, Rand, (like the still worse Reagan) helped to delegitimize capitalism. Still, if you read what she wrote about capitalism, she was 100% right, and if you read what she wrote about facial hair, she was 100% superficial and doltish. So, on an Ayn Rand forum, if someone begins defending Rand’s disapproval of facial hair, I might point out that in 2006 the USA experienced a systemic shock to its fiat currency system, and try to direct the conversation to more important matters.
I might also suggest leaving the discussions of facial hair to Western wear discussion boards.
It’s vital to ALWAYS include an indication of how important a subject is. That’s how marketplaces of ideas focus their trading.
An idea that’s false but “spectacularly well-written” should be downvoted to the extent of its destructiveness.
Well, to the extent of its net destructiveness… that is, the difference between the destructiveness of the idea as it manifests in the specific comment, and the destructiveness of downvoting it.
But with that caveat, sure, I expect that’s true.
That said, the net destructiveness of most of the false ideas I see here is pretty low, so this isn’t a rule that is often relevant to my voting behavior. Other considerations generally swamp it.
That said, I have to admit I did not read this comment all the way through. Were it not a response to me, which I make a habit of not voting on, I would have downvoted it for its incoherent wall-of-text nature.
To call “don’t downvote if I’m in the conversation” a local norm might be overstating the case. I’ve heard several people assert this about their own behavior, and there are good reasons for it (and equally good reasons for not upvoting if I’m in the conversation), but my own position is more “distrust the impulse to vote on something I’m emotionally engaged with.”
“I am free, no matter what rules surround me. If I find them tolerable, I tolerate them; if I find them too obnoxious, I break them. I am free because I know that I alone am morally responsible for everything I do.”
― Robert A. Heinlein
(There’s no way to break the rule on posting too fast. That’s one I’d break. Because yeah, we ought not to be able to come close to thinking as fast as our hands can type. What a shame that would be. …Or can a well-filtered internet forum—which prides itself on being well-filtered—have “too much information”)
There’s no fallacy of gray in there. Since votes count just as much in the thread, and our votes will be much more noisy, it would often be best to refrain from voting there. If anything, I might have expected to be accused of the opposite fallacy.
I’m not sure that’s a help for biased voting patterns (which would probably come from the views being expressed), but it might help preventing local mind-killing from spilling out onto the rest of the site.
But I don’t think there’s an easy mechanism for that, and comments will still show up in ‘recent comments’ under discussion.
If your forum has a lot of smart people, and they read the recommended readings, then the more people who participate in the forum, the smarter the forum will be. If the forum doesn’t have a lot of stupid, belligerent rules that make participation difficult, then it will attract people who like to post. If those people aren’t discouraged from posting, but are discouraged from posting stupid things, your forum will trend toward intelligence (law of large numbers, emergence with many brains) and away from being an echo chamber (law of small numbers, emergence with few brains).
I wouldn’t stay up late at night worrying about how to get people to up-vote or down-vote things. They won’t listen anyway, but even so, they might contain a significant amount of the wisdom found in “the Sequences,” and wisdom from other places, too. They might even contain wisdom from the personal experiences of people on the blue and green teams, who then can contribute to the experiential wisdom of the Lesswrong crowd, even without being philosophically-aware participants, and even with their comments being disdained and down-voted.
If your forum has a lot of smart people, and they read the recommended readings, then the more people who participate in the forum, the smarter the forum will be.
If the forum can be said to have an intelligence which is equal to the sum of its parts, or even just some additive function of its parts, then yes. But this is not reliably the case; agents within a group can produce antagonistic effects on each others’ output, leading to the group collectively being “dumber” than its individual members.
If those people aren’t discouraged from posting, but are discouraged from posting stupid things, your forum will trend toward intelligence (law of large numbers, emergence with many brains) and away from being an echo chamber (law of small numbers, emergence with few brains).
This is true in much the same sense that it’s true that you can effectively govern a country by encouraging the populace to contribute to social institutions and discouraging antisocial behavior. It might be true in a theoretical sense, but it’s too vague to be meaningful as a prescription let alone useful, and a system which implements those goals perfectly may not even be possible.
I like the idea of a political theory thread, but before I do it, I think it’s worthwhile to think about some ground rules in order for it to be productive.
Arguments still aren’t soldiers. Being mindkilled is still bad.
Read posts charitably, even if you intend to steelman
Don’t say “Your position requires you to kick puppies” unless you genuinely believe the poster is unaware of that fact.
What happens in Political Theory Open Thread stays in Political Theory Open Thread. Edit: In short, beware the halo effect.
Any other points I should add (particularly about voting/karma)?
Edit:
Distrust your impulse to vote on something. Particularly if you are emotionally engaged. Politics is the mindkiller.
Extreme contrarianism for its own sake is probably not valuable.
“Arguments are soldiers” is practically the definition of democracy. In theory, if my arguments are persuasive enough it will determine whether or not my neighbors or I can continue doing X or start doing Y without being fined, jailed, or killed for it. Depending on what great things I like to do or what horrible things I want to prevent my neighbors from doing, that’s an awfully powerful incentive for me to risk a few minds being killed.
Now, in practice we mostly live in near-megaperson cities in multi-megaperson districts of near-gigaperson countries, whereas my above theory mostly applies to hectoperson and kiloperson tribes. But my ape brain can’t quite internalize that, so the subconscious incentive remains.
But that’s not even the worst of it! I try to read a range of liberal, conservative, libertarian, populist etc. news and commentary, just so that the gaps in each don’t overlap so much… but it requires a conscious effort. Judging by the groupthink in reader comments on these sites, most people’s behavior is the opposite of mine. Why not? Reading about how right you are is fun; reading about how wrong you are is not.
It would be very easy for new would-be LessWrong readers to see the politics threads, jump to conclusions like “Oh, these people think they’re so smart but they’re actually a bunch of Blues! A wise Green like me should look elsewhere for rationality.” Repeat for a few years and the average LessWrong biases really do start to skew Blue, even bad Blue-associated ideas start going unchallenged, etc.
I think I would still love to read what LessWrong users have to say about politics. Probably on a different site. With unconnected karma and preferably unconnected pseudonyms.
Respectfully, that’s not a correct use of the metaphor. The point is that unwillingness to disagree with other positions simply because those positions reach the desired conclusion is evidence of being mindkilled. You don’t shoot soldiers on your side, but for those thinking rationally, arguments are not soldiers, so bad ideas should always be challenged.
This is a real risk, but it’s worth assessing (and figuring out how to assess) how likely it is to occur.
By “thinking rationally”, you must mean epistemically, not instrumentally.
If (to use as Less-Wrong-politically-neutral an allegory as I can) you are vastly outnumbered by citizens who are wondering if maybe those birds were an omen telling us that Jupiter doesn’t want heretics thrown to the lions anymore, I agree that the epistemically rational thing to do is point out that we don’t have much evidence for the efficacy of augury or the existence of Zeus, but the instrumentally rational thing to do is to smile, nod, and point out that eagles are well-known to convey the most urgent of omens. In more poetic words: you don’t shoot soldiers on your side.
The metaphor seems to be as correct as any mere metaphor can get. Is it such a stretch to call an argument a “soldier” for you when it’s responsible for helping defend your life, liberty, or property?
First, that’s not the metaphor we were discussing. Second, the metaphor you are using allows arguments to be soldiers of any ideology, not simply democracy.
I have read “Politics is the mindkiller” and am discussing the same metaphor. For that matter, I’m practically recapitulating the same metaphor, to make an even stronger point: not only can politics provoke irrational impulses to support poor arguments on your “side”, politics can create instrumentally rational incentives to (publicly, visibly, not internally) support poor arguments. Sometimes you support a morally dubious soldier because of jingoism, sometimes you support him because he’s the best defense in between you and an even worse soldier.
Would you be more specific about how you think my use of the metaphor is different and/or invalid?
I do think I’ve given a compelling counterexample to “bad ideas should always be [publicly] challenged”. (my apologies if the implicit [publicly] here was not your intended claim, but the context is that of a proposed public discussion) Have you changed your mind about that claim, or do you see a problem with my reasoning? For that matter, in my hypothetical political forum would you be arguing for atheism or for more compassionate augury yourself?
The preposition of your second sentence suggests a miscommunication of my initial claim. I didn’t intend to say “arguments are soldiers of democracy”, but rather “arguments are soldiers in a democracy”. You’re still right that this also applies to non-democracies: in any state where public opinion affects political policy, incentives exist to try and steer opinion towards instrumentally rational ends even if this is done via epistemically irrational means. Unlimited democracy is just an abstract maximum of this effect, not the only case where it applies.
In brief, I think my interpretation is right because it is consistent with the intended lesson, which is “Don’t talk about Politics on LessWrong.” In other words, I understood the point of the story to be that treating arguments as soldiers interferes with believing true things.
I agree that “bad ideas should be publicly challenged” is only true if what I’m trying to do is believe true theories and not believe false theories. If I’m trying to change society (i.e. do politics), I shouldn’t antagonize my allies. The risk is that I will go from disingenuously defending my allies’ wrong claims to sincerely believing my allies’ wrong claims, even in the face of the evidence. That’s being mindkilled. In short, engaging in the coalition-building necessary to do politics is claimed to cause belief in empirically false things. I.e. “Politics is the Mindkiller.”
My interpretation could be summarized in similar fashion as “really, really, don’t talk about politics on LessWrong”—whether this is “consistent” or not depends on your definition of that word.
I agree with your interpretation of the point of the story… and with pretty much everything else you wrote in this comment, which I guess leaves me with little else to say.
Although, that’s an example of another issue with political forums, isn’t it? In an academic setting, if a speaker elicits informed agreement from the audience about their subject, that means we’ve all got more shared foundational material with which to build the discussion of a closely related subsequent topic. Difficult questions without obvious unanimous answers do get reached eventually, but only after enough simpler related problems have been solved to make the hard questions tractable.
Politics instead turns into debates, where discussions shut down once agreement occurs, then derail onto the less tractable topics where disagreement is most heated. Where would we be if Newton had decided “Yeah, Kepler’s laws seem accurate; let me just write “me too” and then we’re on to weather prediction!”
To me, this just shows that a ban on political argumentation is the very last thing that Lesswrong needs. The accusation of being “mind-killed” is levied by those whose minds are too emotionally dysfunctional for them to tell the difference between abolition and slave ownership (after all, one is blue and the other is green, and there couldn’t very well be an objective reason for either side holding their position, could there?).
The ability to stifle debate with an ad hominem and a karmic downgrade is the mark of a totalitarian (objectively unintelligent) forum, not a democratic (more intelligent than totalitarian) one. Now a libertarian and democratic forum with smart filters? That’s smarter still. In hindsight, everyone agrees so, but in present scenarios, many people are corrupted or uneducated, and lack comprehension.
This is one of the primary reason posts are labeled as mind-killed—because those posts are actually higher-level comprehension and what people don’t understand, they often attempt to destroy (especially where force is involved—people generally hate to be held accountable for possessing evil beliefs, and politics is the domain of force).
Every argument I make, no matter how seemingly mind-killed it is, is always able to be defended by direct appeal to the evidence. Many people don’t understand the evidence, though, or they deny it. Evidence that places sociopaths and their conformists on the wrong side of morality will always be fought, tooth and nail. To test this out, tell your entire family that they’re all thieves, no better than the Nazis who watched train cars of Jews go by in the distance, at your next Thanksgiving meal. (Don’t actually try this. LOL.)
Still, at some point, there was a family gathering prior to Nazi Germany, where all hope hadn’t been lost, and someone told their family that they should all buy rifles and join the resistance. That person was right. He was reported, sent to prison, and murdered by the prevailing “consensus view.” …So the Warsaw Jews had to figure it out later, and resist with a far smaller chance of success.
As John Ross wrote in “Unintended Consequences” if you wait to stand up for what’s right until you’re 98 pounds and being herded onto a cattle car with only the clothes on your back, it’s too late for you to have a chance at winning. You need to deploy soldiers when you’ll be hooted down for deploying soldiers. And, you need to be certain you’re in the right, while deploying soldiers.
The best thing possible is to make sure that your soldiers are defending something defensible at its core. The best way to do this is to quickly show that such soldiers are not in the wrong, and clearly aren’t in the wrong. If you’re defending Democrats, Republicans, most Libertarians, Greens, or Constitution Party candidates, you have a difficult row to hoe if this is your goal.
Far less difficult is an issue-based stance, and philosophical stance, on any given political subject. So yes, soldiers can be deployed, and here at LW, one would ideally wish to distance oneself from identification with bad arguments or poor defenses of an idea. …So just refrain from up-voting it. Not difficult.
Of course, once someone is tarred with “bad Karma” that’s a scarlet letter that prevents anything useful from that account from ever being considered—an ad hominem attack on all ideas from that account, no matter how valid they are.
If you cannot speak without insulting your audience, you probably aren’t going to convince anyone.
This comment would be much better, therefore, without the insults — the “emotional dysfunction”, the “totalitarian (objectively unintelligent)”, the “corrupted or uneducated”, the “sociopaths and their conformists”, and so on, and so on, ad nauseam.
No, he was wrong. The right thing to buy was tickets overseas.
I see a certain… tension between these two sentences.
Are some ideologies more objectively correct than others? (Abolitionists used ostracism and violence to prevail against those who would return fugitive slaves south. Up until the point of violence, many of their arguments were “soldiers.” One such “soldier” was Spooner’s “The Unconstitutionality of Slavery”—from the same man who later wrote “the Constitution of No Authority.” He personally believed that the Constitution had no authority, but since it was revered by many conformists, he used a reference to it to show them that they should alter their position to support of abolitionism. Good for him!)
If some ideologies are more correct than others, then those arguments which are actually soldiers for those ideologies have strategic utility, but only as strategic “talking points,” “soldiers,” or “sticky” memes. Then, everyone who agrees with using those soldiers can identify them as such (strategy), and decide whether it’s a good strategic or philosophical, argument, or both, or neither.
You seem to have excluded a middle option, namely “I am in favor of heretics not being thrown to the lions, and no amount of bird-related omen interpretation will sway my opinion on the subject one way or another.”
Here on Lesswrong, I’d favor such an argument. However, What happens when you look at a giant crowd of people with their bird masks on, and all of them are looking at you for an answer, and they’re about to throw the heretic to the lions, because they lack moral consciences of their own? It’s hard to argue against a “dishonest” strategic argument that still allows the heretic to live, when logic is out-gunned. Even so, I think that such a thing could be stated here, especially with an alias, in case you’re called for jury duty in the future and want to “Survive Voir Dire.”
...This is an old political question. There are a lot of people who were forced to answer it in times when right and wrong suddenly came into clear focus because it became “life or death.” Anne Frank is hiding in the attic: you have to be “dishonest” to the Nazis who are looking for her. In that case, dishonesty is not only “legitimate” it’s the ONLY moral course of action. If you tell the truth to the Nazis, you are then morally reprehensible. You are morally reprehensible if you don’t even lie convincingly.
Here’s another example where the status quo is morally wrong, and (narrow, short-term, non-systemic, low-hierarchical-level) dishonesty is the only morally acceptable pathway: A fugitive slave has escaped, and is being pursued by Southern bounty-hunters and also Northern judges, cops, and prosecutors. He can be forcibly returned on your ex parte testimony, and you’ll even get reward money. Yet, if you don’t make up a lie, you’re an immoral part of a system of slavery, and an intellectual coward.
Here’s an example that is less clear to the bootlickers and tyrant-conformists among the Lesswrong crowd: You’re called for jury duty. The judge is trying to stack the jury full of people who will agree to “apply the law as he gives it to them.” Since the other veniremen are simpletons who have no curiosity about the system they live under that goes beyond the platitudes they learned in their government school, the judge is likely to succeed. You however, are an adherent to the philosophy of Eliezer Yudkowsky, and you have read about jury rights on a severely down-ranked “mind-killed” comment at Lesswrong. You know the defendant’s only hope is someone who knows that the judge is legally allowed to lie to the jury, the same way the police are, by bad Supreme Court precedent. You know that the victimless crime defendant’s only hope is an independent thinker who will get seated on the jury and then refuse to convict. The defendant will be sent to a rape room for 20 years, to have his young life stolen from him, and have his hopes and dreams destroyed, if you fail to answer the “voir dire” questions like the other conformists, and fail to get seated. So, you get seated, and then, knowing that you are superior in power to the judge once seated, you vote to acquit, exercising your right to nullify the evil laws the defendant is charged with breaking.
All three of the prior lessons reference the same principle: lying to an illegitimate system is proper and moral. Yet, the powerful status quo derides this course of action as “immoral.” Thus, it is the domain of proper philosophy to address the issue, and provide guidance to those who lack emotional intelligence.
(Ideally, if Lesswrongers are actually “less wrong” about a subject, the uprank and downrank features could begin to indicate real political intelligence, or how closely one’s argument mirrors reality, or a viable philosophical position. --regardless of whether advocates in one direction or another are “mind-killed” or not. Even a “brain-dead” or “mind-killed” person can say 2+2=4. So maybe that truth doesn’t get upvoted, because it’s obvious. It’s still true. And in times of universal deceit, telling the truth is a revolutionary act.)
Ultimately, political arguments decide policy. Policy will then decide which innocents will live or die, and whether those innocents will be killed by you, or defended by you.
That’s what politics is. That most people lack any kind of a political philosophy and simply “root for their color” is a tangential aside that has now superseded the legitimacy of the debate.
I prefer to have arguments act as soldiers, because that’s still preferable to actual soldiers acting as soldiers. That’s still debate. We’re all adults here. My feelings won’t be hurt when this is downvoted into oblivion and I need to create another profile in order to down-vote somebody’s stupid (unwittingly self-destructive) comment.
Which, by the way, should be the criterion for judging all political arguments: what is the predicted outcome? What is the utility? What is the moral course of action based on a common moral standard? How do the good guys win?
Good guys: abolitionists, allies in WWII, people who sheltered Shin Dong-Hyuk and didn’t report him to the secret police in his Escape from Camp 14, the Warsaw ghetto uprising’s marksmen (not the ones who tried to inform on them, or who counseled putting faith in “god”)
Worthless: The people hooting down debate as “mind-killing,” those who counseled faith in god in the warsaw ghetto, the people who turn anti-government meetings into prayer sessions, those who gave up their friends to avoid being killed by the KGB, etc., those who suggest silencing political debate about ending the drug war because “it’s a downer” (as much of a “downer” as living 14 years or more behind bars like Gary Fannon? --you callous, uncaring pukes!)
Bad guys: the slave owners, the plantation owners who politically opposed abolition, the Nazis, the KGB, the teacher who beat the little girl to death for hiding a few kernels of corn in her pocket inside North Korea’s Camp 14, those who want the drug war to continue because they profit from it, people with a lot of private property who vote for the state to control all private property, etc.
Being dim witted and shutting down debate is not being the opposite of mind-killed. It’s not being philosophical. It’s being brain-dead far worse than being mind-killed—it’s being “inanimate to begin with,” or “still-born.”
That would be a good comeback for those accused of being “mind-killed.” “Tell me how I’m mindlessly taking a side of an irrational argument, or bear the true appellation of ‘still-born’ or philosophically absent, follower, conformist.”
And isn’t that the most damning charge anyway? “Conformist.” Someone who adopts a philosophical position without any reason, simply because there’s safety in numbers, and someone in authority gave the command. Big strong men who don’t dare to defend a logical principle, physically brave, but intellectually weak. …The core of the Nuremburg defense, which was universally ruled illegitimate by western philosophy, law, and civilization.
I suspect that the real fear on this board is that narrow logic divorced from reality is no longer adequate to defend one’s reputation as a thinker.
“Going with the flow” might work in an uncorrupted, civilized regime. Now, show me one! This is really why people don’t want to have to reference reality. Reality implies a bare minimum standard in terms of moral responsibility, and that’s the most terrifying idea known to the majority of men and women, worldwide.
How else do you explain the very low moral standards and corresponding bad results of the majority?
The majority are conformists, guided by power-seeking sociopaths. This isn’t a fringe theory; it’s a truth referenced by all the great political thinkers. To deny this omnipresent truth is to indicate an internal problem with moral comprehension, or with basic philosophy.
Those who want to kill or punish anyone should be highly suspect, and a natural question follows: would that be retaliatory force, or “preemptive” force? There is always a path to the truth for those who know how to ask the right questions. Rather than point at someone like Donald Sutherland in “Invasion of the Body Snatchers” while typing out “mind-killed,” perhaps it would make sense to learn a little economics, law, and libertarian philosophy, and to ask questions aimed at finding where it’s fundamentally mistaken. The same goes for the supporting arguments of a political position.
I’m always willing to tell you why I think I’m right, and offer evidence for it that meets you on your own terms and your own comprehension of reality, and individual facts and evidence within it. I can drill down as far as anyone wishes to go.
What I can’t do is respond in any meaningful way to a crowd of people yelling “mind-killed” as a thrown bottle bounces off my lectern and my mic is turned off. That’s what karma does to political conversations. It lets those who feel intelligent kill the debate, and kill the emergence of the Lesswrong cybernetic mind.
You haven’t said much besides “I’m right and you’re wrong”.
I am wary of people with black-and-white minds.
I don’t read about how I am wrong. I only read about how other people (sometimes including my former selves) are wrong, and that’s fun too.
Seconded on the different site, unconnected karma and unconnected pseudonyms. Also, it’d be nice if it could somehow be dissociated from LW… it might actually be useful to have a link to it easily visible, but if there is one it should sit right next to an explanation of the idea that links to “politics is the mind-killer.”
Separately, the idea of retaining a taboo on things like discussing politicians or the like, and restricting it to mostly issue discussions, also sounds useful.
Downvote spam, but otherwise avoid voting up or down—we’re likely to be voting for biased reasons.
That’s an awesome idea. Maybe amend it to “downvote spam, otherwise vote everything toward 0” so a minority of politically-motivated voters can’t spoil the game for everyone else?
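For concreteness, here is a minimal sketch of that amended rule, assuming a hypothetical Comment record with a running score and a spam flag (the names are illustrative, not any real LessWrong API):

```python
from dataclasses import dataclass

@dataclass
class Comment:
    score: int     # current net karma
    is_spam: bool  # flagged as spam

def vote_toward_zero(comment: Comment) -> int:
    """Return -1 (downvote), +1 (upvote), or 0 (abstain) under the
    proposed norm: always downvote spam; otherwise push the score
    back toward zero instead of expressing agreement."""
    if comment.is_spam:
        return -1
    if comment.score > 0:
        return -1  # damp a positive score back toward 0
    if comment.score < 0:
        return +1  # damp a negative score back toward 0
    return 0       # already at 0; abstain

# Example: a non-spam comment sitting at +3 gets a damping downvote.
assert vote_toward_zero(Comment(score=3, is_spam=False)) == -1
```

The appeal of the rule is that a politically motivated minority can at worst drag scores to zero, not bury or inflate them.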
In addition to my other comment, I think it will be hard to enforce a voting norm that is so inconsistent with the voting norms on the rest of the site.
Disagree, there are successful instances of using karma in ways inconsistent with the rest of the site.
The most important counterexample here is Will Newsome’s Irrationality Game post, where voting norms were reversed: the weirdest/most irrational beliefs were upvoted the most, and the most sensible/agreeable beliefs were downvoted into invisibility. Many of the comments in that thread, especially the highest-voted, have disclaimers indicating that they operate according to a different voting metric. There is no obvious indication that anyone was confused or malicious with regard to the changed local norm.
Hmm. I like the idea that expressing an idea well is rewarded, which your suggestion doesn’t allow. Trying to figure out how to decide between them.
Hmm. How about:
Spam is not engagement, but the poster whose posts prompted this discussion thread was not really interested in a discussion.
Sounds good. Has a side-effect of there being a perceived cost for posting in the thread; you’re more likely to be downvoted.
I generally counsel not downvoting for disagreement anywhere on the site. I think this needs to be stronger.
Mm. I sometimes upvote for things I think are good ideas, as an efficient alternative to a comment saying “Yes, that’s right.” I sometimes downvote for things I think are bad ideas, as an alternative to a comment saying “Nope, that’s wrong.” While I would agree that in the latter case a downvote isn’t as good as a more detailed comment explaining why something is wrong, I do think it’s better than nothing.
So, consider this an opportunity to convince someone to your position on downvotes, if you want to: why ought I change my behavior?
Voting is there to encourage/discourage some kinds of comments. We don’t want people to not make comments just because we disagree with their contents, so we shouldn’t downvote comments for disagreement.
If someone makes a good, well-reasoned comment in favor of a position I disagree with, that merits an upvote and a response.
It might be nice to have a mechanism for voting “agree/disagree” in addition to “high quality / low quality” (as I proposed 3 years ago), but in the absence of such a mechanism we should avoid mixing our signals; a rough sketch of the two-axis idea follows below.
The comments that float to the top should be the highest-quality, not the ones most in line with the LW party line.
And people should be rewarded for making high-quality comments and punished for making low-quality comments, not rewarded for expressing popular opinions and punished for expressing unpopular opinions.
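For what such a mechanism might look like, here is a minimal sketch of a two-axis vote model; the record layout and names are invented for illustration, not an actual LW feature:

```python
from dataclasses import dataclass, field

@dataclass
class TwoAxisComment:
    text: str
    quality_votes: list = field(default_factory=list)    # +1/-1: is it well-argued?
    agreement_votes: list = field(default_factory=list)  # +1/-1: do I agree with it?

    @property
    def quality(self) -> int:
        return sum(self.quality_votes)

    @property
    def agreement(self) -> int:
        return sum(self.agreement_votes)

def sort_thread(comments: list) -> list:
    """Rank comments by quality alone; the agreement tally is shown
    but never affects ordering, so an unpopular-but-well-argued
    comment can still float to the top."""
    return sorted(comments, key=lambda c: c.quality, reverse=True)
```

The point of splitting the axes is that the ranking function never reads the agreement tally, so karma rewards expressing an idea well rather than expressing a popular one.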
I agree that good, well-reasoned comments don’t merit downvotes, even if I disagree with the position they support. I agree that merely unpopular opinions don’t merit downvotes. I agree that low-quality comments in line with the LW party line don’t merit upvotes. I agree that merely popular opinions don’t merit upvotes. I agree that voting is there to encourage and discourage some kinds of comments.
What’s your position on downvoting a neither-spectacularly-well-or-poorly-written comment expressing an idea that’s simply false?
I don’t think that type of comment should be downvoted except when the author can’t take a hint and continues posting the same false idea repeatedly. Downvoting false ideas won’t prevent well-intentioned people from making mistakes or failing to understand things, mostly it would just discourage them from posting at all to whatever extent they are bothered by the possibility of downvotes.
I agree with User:saturn.
An idea that’s false but “spectacularly well-written” should be downvoted to the extent of its destructiveness. Stupidity (the tendency toward unwitting self-destruction) is what we’re trying to avoid here, right? We’re trying to avoid losing. Willful ignorance of the truth is an especially damaging form of stupidity.
Two reasonably intelligent people are unlikely to arrive at completely different and antithetical viewpoints. Thus, the very well-written but false viewpoint is far more damaging than the clearly stupid false viewpoint. If this site helps people avoid damaging their property (their brains, their bodies, their material possessions), or minimizes systemic damage to those things, then it’s more highly functional, and the value is apparent even to casual observers.
Such a value is sure to be adopted and become “market standard.” That seems like the best possible outcome, to me.
So, if a comment is seemingly very well-reasoned, but false, it will actually help to expand irrationality. Moreover, it’s more costly to address the idea, because it “seems legit.” Thus, to not sound like a jerk, you have to expend energy on politeness and form that could normally be spent on addressing substance.
HIV tricks the body into believing it’s harmless by continually changing and “living to fight another day.” If it was a more obvious threat, it would be identified and killed. I’d rather have a sudden flu that makes me clearly sick, but that my body successfully kills, than HIV that allows me to seem fine, but slowly kills me in 10 years. The well-worded but false argument is like a virus that slips past your body’s defenses or neutralizes them. That’s worse than a clearly dangerous poison because it isn’t obviously dangerous.
False ideas are most dangerous when they seem to be true. And such ideas don’t need to seem true to smart people; it’s enough for them to seem true to 51% of voters.
If 51% of voters can’t find fault with a false idea, it can be as damaging as “the state should own and control all property.” Result: millions murdered (and we still dare not talk about it, lest we be accused of being “mind-killed” or “rooting for team A to the detriment of team B,” as if avoiding mass murder weren’t enough of a reason for rooting for a properly-identified “right team”).
Now, what if there’s a reasonable disagreement, from people who know different things? Then evidence should be presented, and either the final winner will become clear or a vital area needing further study can be identified.
If reality is objective, but humans are highly subjective creatures due to limited brain (neocortex) size, then argument is a good way to make progress toward a Lesswrong site that exhibits emergent intelligence.
I think that’s a good way to use the site. I would prefer to have my interactions with this site lead me to undiscovered truths. If absolutely everyone here believes in the “zero universes” theory, then I’ll watch more “Google tech talks” and read more white papers on the subject, allocating more of my time to comprehending it. If everyone here says it’s a toss-up between that and the multiverse theory, or “none of the above,” I might allocate my time to an entirely different, more-likely-to-yield-results subject.
In any case, there is an objective reality that all of us share “common ground” with. Thus, false arguments that appear well reasoned are always poorly-reasoned, to some extent. They are always a combination of thousands of variables. Upranking or downranking is a means for indicating which variables we think are more important, and which ones we think are true or false.
The goal should always be an optimal outcome, including an optimal prioritization.
If you have the best recipe ever for a stevia-sweetened milkshake, and your argument is true, valid, good, and I make the milkshake and I think it’s the best thing ever, and it contains other healthy ingredients that I think will help me live longer, then that’s a rational goal. I’m drinking something tasty, and living longer, etc. However, if I downvote a comment because I don’t want Lesswrong to turn into a recipe-posting board, that might be more rational.
What’s the greatest purpose to which a tool can be used? True, I can use my pistol to hammer in nails, but if I do that, and I eventually need a pistol to defend my life, I might not have it, due to years of abuse or “sub-optimal use.” Also, if I survive attacks against me, I can buy a hammer.
A Lesswrong “upvote” contains an approximation of all of that. Truth, utility, optimality, prioritization, importance, relevance to community, etc. Truth is a kind of utility. If we didn’t care about utility, we might discuss purely provincial interests. However: Lesswrong is interested in eliminating bad thinking, and it thus makes sense to start with the worst of thinking around which there is the least “wiggle room.”
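As a toy illustration of the claim that a single vote compresses many separate judgments, consider the following sketch; the dimensions and weights are invented for illustration, not any actual LessWrong scoring rule:

```python
from dataclasses import dataclass

@dataclass
class Judgment:
    truth: float      # -1..1: is the comment correct?
    utility: float    # -1..1: is it useful?
    relevance: float  # -1..1: does it belong in this community?
    priority: float   # -1..1: is it worth attention right now?

def collapse_to_vote(j: Judgment) -> int:
    """Collapse a multi-dimensional judgment into one vote. Whatever
    weights are chosen, information is lost: a true-but-irrelevant
    comment and a false-but-on-topic one can collapse to the same -1."""
    composite = 0.4 * j.truth + 0.3 * j.utility + 0.2 * j.relevance + 0.1 * j.priority
    if composite > 0.1:
        return +1
    if composite < -0.1:
        return -1
    return 0  # too close to call; abstain
```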
If I have facial hair (or am gay), Ayn Rand followers might not like me. Ayn Rand often defended capitalism. By choosing to distance herself from people over their facial hair, she failed to prioritize her views rationally, and to perceive how others would shape her views into a cult through their extended lack of proper prioritization. So, in some ways, Rand (like the still worse Reagan) helped to delegitimize capitalism. Still, if you read what she wrote about capitalism, she was 100% right, and if you read what she wrote about facial hair, she was 100% superficial and doltish. So, on an Ayn Rand forum, if someone begins defending Rand’s disapproval of facial hair, I might point out that in 2006 the USA experienced a systemic shock to its fiat currency system, and try to direct the conversation to more important matters.
I might also suggest leaving the discussions of facial hair to Western wear discussion boards.
It’s vital to ALWAYS include an indication of how important a subject is. That’s how marketplaces of ideas focus their trading.
Well, to the extent of its net destructiveness… that is, the difference between the destructiveness of the idea as it manifests in the specific comment, and the destructiveness of downvoting it.
But with that caveat, sure, I expect that’s true.
That said, the net destructiveness of most of the false ideas I see here is pretty low, so this isn’t a rule that is often relevant to my voting behavior. Other considerations generally swamp it.
That said, I have to admit I did not read this comment all the way through. Were it not a response to me, which I make a habit of not voting on, I would have downvoted it for its incoherent wall-of-text nature.
I think the norm is pretty strong. I tend to downvote for stupid, not just wrong. But it will need to be explicitly reinforced.
Edit: The norm on the site is also different if you are participating in the conversation (try not to downvote at all) or simply observing.
To call “don’t downvote if I’m in the conversation” a local norm might be overstating the case. I’ve heard several people assert this about their own behavior, and there are good reasons for it (and equally good reasons for not upvoting if I’m in the conversation), but my own position is more “distrust the impulse to vote on something I’m emotionally engaged with.”
I like that, and I think I’ll use something like that in the guidelines.
To echo Alejandro1, downvotes should also go to comments which break the rules.
(There’s no way to break the rule on posting too fast. That’s one I’d break. Because yeah, we ought not to be able to come close to thinking as fast as our hands can type. What a shame that would be. …Or can a well-filtered internet forum, one which prides itself on being well-filtered, have “too much information”?)
Downvoted for fallacy of gray, and because I’m feeling ornery today.
There’s no fallacy of gray in there. Since votes count just as much in the thread, and our votes will be much more noisy, it would often be best to refrain from voting there. If anything, I might have expected to be accused of the opposite fallacy.
This qualification makes it not the fallacy of gray. If that qualifier was implicit from context above, I simply missed it.
I still don’t see how that would relate to the fallacy of gray.
Perhaps a norm of using the anti-kibitzer for the thread?
I’m not sure that’s a help for biased voting patterns (which would probably come from the views being expressed), but it might help preventing local mind-killing from spilling out onto the rest of the site.
But I don’t think there’s an easy mechanism for that, and comments will still show up in ‘recent comments’ under discussion.
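For what an anti-kibitzer amounts to mechanically, here is a minimal sketch, assuming comments are plain records scrubbed before display (the field names are hypothetical; the actual anti-kibitzer is a client-side display option):

```python
def anti_kibitz(comment: dict) -> dict:
    """Blank out identity and score cues before display, so votes and
    replies respond to the content rather than to who wrote it or to
    how the crowd has already voted."""
    scrubbed = dict(comment)
    scrubbed["author"] = "[hidden]"
    scrubbed["score"] = None
    return scrubbed

thread = [{"author": "alice", "score": 12, "text": "Policy X fails because..."}]
print([anti_kibitz(c) for c in thread])
```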
If your forum has a lot of smart people, and they read the recommended readings, then the more people who participate in the forum, the smarter the forum will be. If the forum doesn’t have a lot of stupid, belligerent rules that make participation difficult, then it will attract people who like to post. If those people aren’t discouraged from posting, but are discouraged from posting stupid things, your forum will trend toward intelligence (law of large numbers, emergence with many brains) and away from being an echo chamber (law of small numbers, emergence with few brains).
I wouldn’t stay up late at night worrying about how to get people to upvote or downvote things. They won’t listen anyway, but even so, their comments might contain a significant amount of the wisdom found in “the Sequences,” and wisdom from other places, too. They might even contain wisdom from the personal experiences of people on the blue and green teams, who can then contribute to the experiential wisdom of the Lesswrong crowd, even without being philosophically-aware participants, and even with their comments being disdained and downvoted.
If the forum can be said to have an intelligence which is equal to the sum of its parts, or even just some additive function of its parts, then yes. But this is not reliably the case; agents within a group can produce antagonistic effects on each others’ output, leading to the group collectively being “dumber” than its individual members.
This is true in much the same sense that it’s true that you can effectively govern a country by encouraging the populace to contribute to social institutions and discouraging antisocial behavior. It might be true in a theoretical sense, but it’s too vague to be meaningful as a prescription, let alone useful, and a system that implements those goals perfectly may not even be possible.