I’m not saying that I think Overcoming Bias should be apolitical, or even that we should adopt Wikipedia’s ideal of the Neutral Point of View. But try to resist getting in those good, solid digs if you can possibly avoid it. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it—but don’t blame it explicitly on the whole Republican Party; some of your readers may be Republicans, and they may feel that the problem is a few rogues, not the entire party. As with Wikipedia’s NPOV, it doesn’t matter whether (you think) the Republican Party really is at fault. It’s just better for the spiritual growth of the community to discuss the issue without invoking color politics.
(Now that I’ve been named as a co-moderator, I guess I’d better include a disclaimer: This article is my personal opinion, not a statement of official Overcoming Bias policy. This will always be the case unless explicitly specified otherwise.)
Downvoting this as too political, but:
The “driven by agents rather than impersonal factors” is spot-on.
If your preferred system centralizes power, then when it doesn’t give you a desired result you can always defend the system by diagnosing the problem as a failure to put The Right People in charge.
But who do you blame if a market doesn’t give you a desired result? E.g., after the hurricane hits, when portable generators are going for a 500% markup, it’s hard to rationally blame “the price gougers” when that term could be expanded out as “the people who are offering you the lowest price of anyone in the world”. A hundred times as many sellers would be willing to deliver you a generator at 5000% markup, and surely any ire aimed at the former group shouldn’t spare the latter. So if you realize (even subconsciously) that you can’t get mad at everyone in the world, but you still want to get mad, you have to conclude that somehow “the system” itself (perhaps embodied by some “middleman minority” surrogate) deserves your anger.
This is highly relevant to lesswrong—markets are one of the best information aggregation mechanisms available and so understanding why people oppose markets is useful.
Understanding why people oppose markets is very useful, but I’ve already got many redundant sources of information helping me at that task. Having a forum where discussion isn’t infected by “Why do all those wrong-thinking people oppose our truth?” would also be very useful, but there I’m mostly out of luck. I can split my time between the liberal (conservatives and libertarians are so heartless!) and conservative (liberals and libertarians are so evil!) and libertarian (liberals and conservatives are so stupid!) sites instead, but a diverse selection of people talking past each other is much less valuable than a diverse selection talking to each other.
For example, many of those sites that should otherwise know better are incapable of discussing futurist ideas without pigeonholing them politically. For a recent example: “There is a rottenness at the heart of the transhuman project … mythology of the smugly self-satisfied hypercapitalists who have unintentionally done so much to destroy so many of the moral and interpersonal values of post-Enlightenment civilization.” If Charlie Freaking Stross can no longer discuss the singularity without digressing into anti-free-market ranting, that suggests it might be particularly valuable to maintain futurist discussion forums in which economic liberals can participate without being treated like ignorant specimens in need of re-education.
Understanding why people oppose markets is useful if you are proposing some kind of institutional reform and the traditional dynamics are not optimal. The author of the article apparently assumes the disagreement exists only because some groups, like intellectuals, are not sufficiently rewarded.
The problem with libertarianism is that it all too often takes for granted the products of a highly sophisticated system of government regulation. The typical young, highly privileged person’s libertarianism is an incredibly disgusting sight. I do believe we need more harmonized free trade, though. Right now some types of resources (natural resources, the processing of natural resources) are freely traded worldwide, while other types (on-site human labour) are not, resulting in a situation whereby, in the countries with the largest per-capita resource consumption (‘developed countries’), it is cheaper to build on-site a poorly thermally insulated shack and put an aircon on it than to build better thermally insulated housing. This is rather bad for the environment. Furthermore, the taxes go to support the citizens of the same country, not those most in need, and a non-nationalistic person can’t really support that either.
Do people “oppose markets” in a way that makes this classification useful? If not, a privileged hypothesis could lead down a blind alley.
If not then it is even less political.
Walk me through that.
If it isn’t even useful to describe markets being opposed then there isn’t much of a political battle happening, is there?
The label can still have political use even if it doesn’t have practical use.
For an example, let’s go with Wiggins, people with green eyes and black hair. Wiggins are untrustworthy, and put too much ketchup on their fries, everyone knows that. A minor political party could even sprout up in Australia on a Wiggin-centric platform. But then some statisticians raise the point that we don’t have strong evidence differentiating Wiggins from other people. What does the political party in Australia do? “If you’re not with us, you’re with the Wiggins! How can they say there’s no difference between you and a Wiggin, when we can so clearly see the difference? Remember to vote to protect Australia from the Wiggins!”
Sure, at some point reality will become inconvenient. But it takes more than mere evidential neutrality to stop the Australian Anti-Wiggin Party—all it means is people have to use their intuitions.
O.o
Downvoting your comment.
I think you need to reread this article. It doesn’t go as far as you seem to think it does. I very much doubt many people on LessWrong are mindkilled by talking about markets. I mean, seriously, we talk economics and cognitive bias with potential political implications all the time. Indeed, it would be impossible to do otherwise.
Looking at some of the comments citing politics as the mindkiller and comparing them to the article Eliezer wrote, this norm has clearly mutated beyond reason. It has now been applied to everything from biology, sociology, and sexuality to, recently, even religion.
There needs to be some counter-push to this norm creep.
Biology, sociology, sexuality, and religion are near-completely dominated by political thinking, especially religion which is basically the same thing as politics. (E.g., I have serious doubts about the standard Darwinian account of complex adaptations, but I can’t talk about those doubts for the same reasons I can’t talk about my opinions on climate science.) Given what I’ve seen of your comments I’d have thought this would be obvious to you. LessWrong doesn’t seem to recognize it. I don’t care whether anti-politics norms are praised or demonized, but I do wish they were applied reflectively and consistently in any case.
Yeah I guess you are right. At the end of the day I just want talking about markets to be OK on LessWrong.
Sorry. (^_^)
Sure, me too.
While I’m at it, I have other implausible wishes I’d love to have granted.
In the meantime, I generally assume that whenever an organization has a “let’s not talk about X because that always leads to unproductive/unpleasant discussions” norm (which is usually the case), there’s a space of privileged positions about X implicit in the resulting conversations which cannot be challenged.
The problem, one thinks, occasionally, in the abstract of Far mode, is that some kinds of politics tend to drag our identities into them, such that if we were wrong about something, then we were the wrong person, and that is absolutely unacceptable. This does not seem to actually happen with biology or sociology so much. So I guess the REAL policy is against discussing politics you identify with?
The fact that “Politics is the Mind-Killer” doesn’t call for a blanket ban on political discussion doesn’t mean that a community norm against nonessential political discussion is necessarily a bad idea. Now, I would say that roystgnr’s jumping the gun a bit here—the OP’s tied pretty closely to heuristics and biases research, and avoids explicit color politics—but I’d rather we engage the norm on its own terms rather than in terms of its relation to Eliezer’s post. After all, we’re hardly bound to take Eliezer’s word as gospel.
Ok, sure, I can agree with this, even if I think Overcoming Bias/LessWrong used to be more interesting when we stuck to EY’s proposed norm. But come now, you must know what I’m talking about when I say:
There has been overreach. Worse, “politics is the mindkiller” is now being used as a political tool.
Yeah. There are a number of meta-level concerns here that complicate the problem, but at the object level the difficulty is that our present attitude towards politics creates an unstable equilibrium: there are politically charged topics that can and should be discussed with the LW toolkit, but there’s a pretty strong tendency to go beyond those tools and into unproductive sparring, and no good way to stop it.
I’m for the mind-killer meme insofar as it provides a way to put on the brakes before discussion gets to that point. But I don’t think it’s actually very good at that, especially since there’s the potential for it to be used as a bludgeon against political viewpoints individual posters don’t agree with (those they do, of course, register as common sense rather than ideology). Banning politics altogether is one way to deal with this, hence the norm creep; and it really does need to be dealt with. But I’d like to see a better approach.
I wouldn’t have thought this to be a mindkilling subject, but I am seeing evidence of it.
Politics is the mindkiller is the mindkiller.
“Politics is the mindkiller” is politics.
Yes I guess I can agree with that.
No, I meant the subject of markets. I’d think of it as a mindkilling subject IRL, but not here.
It never was so far. Maybe we should start encouraging new posters to find good ways of coping with such feelings instead of shielding them from more and more “political” subjects?
We used to be stronger as a community.
In the real world the inability to avoid mindkilling will cripple anyone’s rationalist dojo.
A while ago I made a suggestion for a poll that would interrogate LW users’ beliefs on subjects deemed mindkilling. There were two reasons for this: to map the overall space of ideas where mindkilling takes place, and to look for areas which turned out to be overwhelmingly one-sided.
I go to great pains to think dispassionately about things, (as, I imagine, do a lot of LWers), but there are still some subjects which I know I can’t think about objectively. More worryingly, I wonder what subjects I don’t notice I can’t think about objectively.
One warning sign is attributing disagreement with your views on a subject to “bias”, and then engaging in armchair speculation about the psychological defects that must be responsible for this bias. For an example, see the article linked in the original posting, and almost the whole of this thread.
An especially egregious variation on this theme is evolutionary psychological speculation. I speculate that people do this because in the ancestral environment your audience wouldn’t call you out on it if you came up with a fully general explanation of something and asserted it confidently as long as your audience already agreed with your conclusion.
Or for that matter most of the sequences.
Precisely. E.g., the charity-diversification disagreement I ran into here several times (even before I formed an opinion on AI stuff). I said, on several occasions, that non-diversification results in larger rewards for finding and exploiting deficiencies in the charity-ranking algorithms employed; I even provided a clear example from my area of expertise of how targets respond to aiming methods. Nobody has ever even tried to refute this argument, or even claimed that the effect is not strong enough, or anything of the sort. Every single time, someone just posts a reference to some fallacy which is entirely irrelevant to my argument and asserts that it is the cause of my view (and that typically gets upvotes). One gets more engaging discussion simply asserting that you guys are wrong, with no explanation given, than providing a well-defined argument, because the argument itself doesn’t make a damn difference unless it is structured in the format of the ‘assert a bias’ game. (Whenever I get upvotes for being contrarian, it’s usually for really shitty arguments following the ‘assert a bias’ format rather than the ‘there is such-and-such mechanism’ format.)
That’s quite a sensitive test, though. I’m trying to make my views unbiased. If I succeed, someone who still exhibits a greater amount of bias will either disagree with me, or I’ll disagree with their reasoning.
Well, you might have different priors, leading to different posterior beliefs from the same data; or you might have different values, leading to different decisions or policy prescriptions from the same descriptive beliefs.
(One might expect that a person raised in a large close-knit extended working-class immigrant family might have different values regarding economics than a person raised in a small individualistic nuclear middle-class ethnic-majority family, for instance.)
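The “different priors” point can be made concrete with a toy Bayesian sketch. (The coin-flip model, the Beta priors, and every number below are my own illustrative assumptions, not anything from the thread.) Two agents observe exactly the same data and, without either making an error, walk away with noticeably different posterior beliefs:

```python
# A minimal sketch: two agents with different Beta priors over a
# coin's bias update on the same data and reach different posteriors.

def beta_posterior_mean(prior_a, prior_b, heads, tails):
    """Posterior mean of a Beta(prior_a, prior_b) prior after
    observing `heads` successes and `tails` failures."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

# Both agents see the same ten flips: 7 heads, 3 tails.
heads, tails = 7, 3

# Agent 1 starts out roughly uniform; Agent 2 strongly expects fairness.
agent1 = beta_posterior_mean(1, 1, heads, tails)    # Beta(1, 1) prior
agent2 = beta_posterior_mean(50, 50, heads, tails)  # Beta(50, 50) prior

print(round(agent1, 3))  # 0.667 -- leans toward "biased coin"
print(round(agent2, 3))  # 0.518 -- still close to "fair coin"
```

Neither agent is mis-processing the evidence; the gap comes entirely from where they started, which is the sense in which disagreement need not imply bias.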
Note that I said someone who is more biased in an arena will disagree with me, not that someone who disagreed with me in an arena was exhibiting more bias.
In the real world a dojo (rationalist or otherwise) that anyone can walk into at any time and join in any of the exercises without any filtering or guidance or partnering is pretty much guaranteed to end up crippled.