I think you need to reread this article. It doesn’t go as far as you seem to think it does. I very much doubt many people on LessWrong are mindkilled by talking about markets. I mean, seriously, we talk about economics and cognitive bias with potential political implications all the time. Indeed it would be impossible to do otherwise.
I’m not saying that I think Overcoming Bias should be apolitical, or even that we should adopt Wikipedia’s ideal of the Neutral Point of View. But try to resist getting in those good, solid digs if you can possibly avoid it. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it—but don’t blame it explicitly on the whole Republican Party; some of your readers may be Republicans, and they may feel that the problem is a few rogues, not the entire party. As with Wikipedia’s NPOV, it doesn’t matter whether (you think) the Republican Party really is at fault. It’s just better for the spiritual growth of the community to discuss the issue without invoking color politics.
(Now that I’ve been named as a co-moderator, I guess I’d better include a disclaimer: This article is my personal opinion, not a statement of official Overcoming Bias policy. This will always be the case unless explicitly specified otherwise.)
Looking at some of the comments citing politics as the mindkiller and comparing them to the article Eliezer wrote, this norm has clearly mutated beyond reason. It has now been applied to everything from biology, sociology, and sexuality to, most recently, religion.
There needs to be some push-back against this norm creep.
Biology, sociology, sexuality, and religion are near-completely dominated by political thinking, especially religion, which is basically the same thing as politics. (E.g., I have serious doubts about the standard Darwinian account of complex adaptations, but I can’t talk about those doubts for the same reasons I can’t talk about my opinions on climate science.) Given what I’ve seen of your comments I’d have thought this would be obvious to you. LessWrong doesn’t seem to recognize it. I don’t care whether anti-politics norms are praised or demonized, but I do wish they were applied reflectively and consistently in any case.
Yeah, I guess you are right. At the end of the day I just want talking about markets to be OK on LessWrong.
Sorry. (^_^)
I do wish they were applied reflectively and consistently in any case.
Sure, me too. While I’m at it, I have other implausible wishes I’d love to have granted.
In the meantime, I generally assume that whenever an organization has a “let’s not talk about X because that always leads to unproductive/unpleasant discussions” norm (which is usually the case), there’s a space of privileged positions about X, implicit in the resulting conversations, which cannot be challenged.
The problem, one thinks occasionally, in the abstract of Far mode, is that some kinds of politics tend to drag our identities into them, such that if we were wrong about something, then we were the wrong person, and that is absolutely unacceptable. This does not seem to actually happen with biology or sociology so much. So I guess the REAL policy is against discussing politics you identify with?
The fact that “Politics is the Mind-Killer” doesn’t call for a blanket ban on political discussion doesn’t mean that a community norm against nonessential political discussion is necessarily a bad idea. Now, I would say that roystgnr’s jumping the gun a bit here—the OP’s tied pretty closely to heuristics and biases research, and avoids explicit color politics—but I’d rather we engage the norm on its own terms rather than in terms of its relation to Eliezer’s post. After all, we’re hardly bound to take Eliezer’s word as gospel.
Ok, sure, I can agree with this, even if I think Overcoming Bias/LessWrong used to be more interesting when we stuck to EY’s proposed norm. But come now, you must know what I’m talking about when I say:
It has now been applied to everything from biology, sociology, and sexuality to, most recently, religion.
There has been overreach. Worse, “politics is the mindkiller” is now being used as a political tool.
Yeah. There are a number of meta-level concerns here that complicate the problem, but at the object level the difficulty is that our present attitude towards politics creates an unstable equilibrium: there are politically charged topics that can and should be discussed with the LW toolkit, but there’s a pretty strong tendency to go beyond those tools and into unproductive sparring, and no good way to stop it.
I’m for the mind-killer meme insofar as it provides a way to put on the brakes before discussion gets to that point. But I don’t think it’s actually very good at that, especially since there’s the potential for it to be used as a bludgeon against political viewpoints individual posters don’t agree with (those they do agree with, of course, register as common sense rather than ideology). Banning politics altogether is one way to deal with this, hence the norm creep; and it really does need to be dealt with. But I’d like to see a better approach.
I wouldn’t have thought this to be a mindkilling subject, but I am seeing evidence of it.
“Politics is the mindkiller” is the mindkiller.
“Politics is the mindkiller” is politics.
Yes, I guess I can agree with that.
No, I meant the subject of markets. I’d think of it as a mindkilling subject IRL, but not here.
It never was, so far. Maybe we should start encouraging new posters to find good ways of coping with such feelings instead of shielding them from more and more “political” subjects?
We used to be stronger as a community.
In the real world the inability to avoid mindkilling will cripple anyone’s rationalist dojo.
A while ago I made a suggestion for a poll which interrogated LW users’ beliefs on subjects deemed to be mindkilling. There were two reasons for this: to map the overall space of ideas where mindkilling takes place, and to look for areas which turned out to be overwhelmingly one-sided.
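To make the second goal concrete, here is a minimal sketch in Python (the topics, responses, and the 0.9 cutoff are all invented for illustration) of how one could flag overwhelmingly one-sided areas in such a poll:

    # Toy sketch of the proposed analysis (hypothetical data, not real poll
    # results): measure how lopsided each topic's answers are, flag extremes.
    from collections import Counter

    # Hypothetical responses: topic -> list of positions taken by users.
    responses = {
        "markets":       ["pro", "pro", "anti", "pro", "mixed"],
        "school_prayer": ["anti", "anti", "anti", "anti", "anti"],
        "immigration":   ["pro", "anti", "pro", "anti", "mixed"],
    }

    def majority_share(answers):
        """Fraction of respondents holding the single most common position."""
        return max(Counter(answers).values()) / len(answers)

    ONE_SIDED_THRESHOLD = 0.9  # arbitrary cutoff for "overwhelmingly one-sided"

    for topic, answers in responses.items():
        share = majority_share(answers)
        flag = "ONE-SIDED" if share >= ONE_SIDED_THRESHOLD else ""
        print(f"{topic:15s} majority share = {share:.2f} {flag}")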
I go to great pains to think dispassionately about things (as, I imagine, do a lot of LWers), but there are still some subjects which I know I can’t think about objectively. More worryingly, I wonder what subjects I don’t notice I can’t think about objectively.
More worryingly, I wonder what subjects I don’t notice I can’t think about objectively.
One warning sign is attributing disagreement with your views on a subject to “bias”, and then engaging in armchair speculation about the psychological defects that must be responsible for this bias. For an example, see the article linked in the original posting, and almost the whole of this thread.
An especially egregious variation on this theme is evolutionary-psychological speculation. I speculate that people do this because, in the ancestral environment, your audience wouldn’t call you out on it if you came up with a fully general explanation of something and asserted it confidently, as long as they already agreed with your conclusion.
Or, for that matter, most of the sequences.
Precisely. E.g. the charity diversification disagreement I ran into here several times (even before I formed an opinion on AI stuff). I said, on several occasions, that non-diversification results in larger rewards for finding and exploiting deficiencies in the charity-ranking algorithms employed; I even provided a clear example from my area of expertise of how targets respond to aiming methods. Nobody has ever even tried to refute this argument, or even to claim that the effect is not strong enough, or something. Every single time, someone just posts a reference to some fallacy which is entirely irrelevant to my argument and asserts that it is the cause of my view (and that typically gets upvotes). One gets more engaging discussion by simply asserting that you guys are wrong (no explanation given) than by providing a well-defined argument, because the argument itself doesn’t make a damn difference unless it is structured in the format of the ‘assert a bias’ game (whenever I get upvotes for being contrarian, it’s usually for really shitty arguments following the ‘assert a bias’ format rather than the ‘there is such-and-such mechanism’ format).
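To make the mechanism explicit, here is a toy model in Python (all numbers invented): under winner-take-all giving, a small manipulation of a charity’s measured score captures the entire pot, while under diversified (proportional) giving the same manipulation only shifts a small slice, so the reward for gaming the ranking is far larger without diversification:

    # Toy model (invented numbers): reward to charity B for inflating its
    # measured score under winner-take-all vs. proportional allocation.
    true_scores = {"A": 10.0, "B": 9.5, "C": 7.0}
    pot = 1_000_000  # total donations to allocate

    def winner_take_all(scores):
        best = max(scores, key=scores.get)
        return {c: (pot if c == best else 0.0) for c in scores}

    def proportional(scores):
        total = sum(scores.values())
        return {c: pot * s / total for c, s in scores.items()}

    # B games the evaluation just enough to leapfrog A (a 0.6-point bump).
    gamed = dict(true_scores, B=10.1)

    for allocate in (winner_take_all, proportional):
        gain = allocate(gamed)["B"] - allocate(true_scores)["B"]
        print(f"{allocate.__name__:16s} gain to B from gaming: {gain:,.0f}")
    # winner_take_all: the bump captures the whole 1,000,000;
    # proportional: the same bump gains only ~14,000.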
That’s quite a sensitive test, though. I’m trying to make my views unbiased. If I succeed, someone who still exhibits a greater amount of bias will either disagree with me, or I’ll disagree with their reasoning.
Well, you might have different priors, leading to different posterior beliefs from the same data; or you might have different values, leading to different decisions or policy prescriptions from the same descriptive beliefs.
(One might expect that a person raised in a large close-knit extended working-class immigrant family might have different values regarding economics than a person raised in a small individualistic nuclear middle-class ethnic-majority family, for instance.)
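A minimal illustration of the priors point (numbers invented): two reasoners apply Bayes’ rule to the same evidence but start from different priors, and so end up with different posteriors:

    # Same evidence, different priors -> different posteriors (Bayes' rule).
    def posterior(prior_h, p_e_given_h, p_e_given_not_h):
        """P(H|E) = P(H)P(E|H) / [P(H)P(E|H) + P(~H)P(E|~H)]."""
        num = prior_h * p_e_given_h
        return num / (num + (1 - prior_h) * p_e_given_not_h)

    # The evidence E is four times as likely if hypothesis H is true.
    p_e_given_h, p_e_given_not_h = 0.8, 0.2

    for name, prior in [("optimist", 0.50), ("skeptic", 0.05)]:
        post = posterior(prior, p_e_given_h, p_e_given_not_h)
        print(f"{name}: prior {prior:.2f} -> posterior {post:.2f}")
    # optimist: 0.50 -> 0.80; skeptic: 0.05 -> 0.17.
    # Same data, honest updating, different conclusions.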
Note that I said someone who is more biased in an arena will disagree with me, not that someone who disagreed with me in an arena was exhibiting more bias.
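With some invented numbers, the asymmetry looks like this: even if greater bias almost always produces disagreement (a highly sensitive test), most people who disagree can still be unbiased, because honest disagreement from differing priors or values is common too:

    # Sensitivity vs. positive predictive value (all numbers invented).
    p_biased = 0.10                    # base rate of "more biased than me"
    p_disagree_given_biased = 0.95     # the "sensitive test"
    p_disagree_given_unbiased = 0.40   # honest disagreement (priors, values)

    p_disagree = (p_biased * p_disagree_given_biased
                  + (1 - p_biased) * p_disagree_given_unbiased)
    p_biased_given_disagree = p_biased * p_disagree_given_biased / p_disagree

    print(f"P(disagree | biased) = {p_disagree_given_biased:.2f}")   # 0.95
    print(f"P(biased | disagree) = {p_biased_given_disagree:.2f}")   # ~0.21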
In the real world a dojo (rationalist or otherwise) that anyone can walk into at any time and join in any of the exercises without any filtering or guidance or partnering is pretty much guaranteed to end up crippled.
O.o
Downvoting your comment.