Every contribution starts out negative by default because it takes up space in recent comments and elsewhere, occupies the minds of LW commenters, and takes time to read. Beyond that, I admit your post caused no serious negative consequences. Combined with some other recent harmless threads, that counts as evidence against the “no politics” guideline. On the other hand, harmless violations of such guidelines can cause harmful violations in future top-level posts, and most of the harm may be in low-probability large-scale arguments, like the ones we had about gender.
I do think we keep avoiding crucial parts of the problem that are a bad idea to talk about, but that are frustrating to avoid talking about once the topic has been brought up (if only because of the sense that what has been said will be taken for a community consensus), and this frustration is probably what’s actually causing me to complain.
On the other hand, harmless violations of such guidelines can cause harmful violations in future top-level posts, and most of the harm may be in low-probability large-scale arguments, like the ones we had about gender.
Fair enough. What we’re facing here is the same ongoing conflict of visions about what the range of appropriate topics on LW should be. My opinion is that if the forum as presently constituted isn’t capable of handling sensitive topics in a rational manner, and if any topic with even the remotest sensitive implications should therefore be avoided, then the whole project should be written off as a failure and the website reconstituted along the standard guidelines for technical forums (i.e. with a list of precise and strict definitions of suitable technical topics, and rigorous moderation to eradicate off-topic comments).
Certainly, I find it comically absurd that there should be a community of people boasting about their “rationality” who at the same time have to obsessively self-censor to avoid turning their discussions into food fights. I’m surely not alone in this assessment, and the bad PR from such a situation should be a sufficient reason for the owners of LW to undertake some radical steps (in one direction or another) to avoid it.
I do think we keep avoiding crucial parts of the problem that are a bad idea to talk about, but that are frustrating to avoid talking about once the topic has been brought up (if only because of the sense that what has been said will be taken for a community consensus), and this frustration is probably what’s actually causing me to complain.
I’m not sure I understand what you’re saying here. Are you saying that there are some points relevant to this discussion that you’re reluctant to bring up because they are “a bad idea to talk about”?
Certainly, I find it comically absurd that there should be a community of people boasting about their “rationality” who at the same time have to obsessively self-censor to avoid turning their discussions into food fights.
The official motto in the logo is “refining the art of human rationality”, which implies that our rationality is still imperfect. I don’t see why it’s absurd or bad PR to say that we’re more rational than most other communities, but still not rational enough to talk about politics.
The official motto in the logo is “refining the art of human rationality”, which implies that our rationality is still imperfect.
It’s still imperfect, but can’t people try a little harder?
I don’t see why it’s absurd or bad PR to say that we’re more rational than most other communities, but still not rational enough to talk about politics.
When will we be rational enough to talk about politics (or subjects with political implications)? I am skeptical that any of the justifications for not talking about politics will ever change. Right now, we have a bunch of intelligent, rationalist people who have read at least a smattering of Eliezer’s writings, yet who have differing experiences and perspectives on certain subjects, with a lot of inferential distance in between. We have veteran community members, and we have new members. In a few years, we will have exactly the same thing, and people will still be saying that politics is the “mind-killer.”
I have to wonder, if LW isn’t ready to talk about politics now, will we ever be ready (on our current hardware)? I am skeptical that we all can just keep exercising our rationality on non-political subjects, and then one day a bell will go ding, and suddenly a critical mass of us will be rational enough to discuss politics.
You can’t learn to discuss politics rationally merely by studying rationality in the abstract, or studying it when applied to non-political subjects. Rationality applied to politics is a particular skill that must be exercised. Biases will flare up even for intelligent, rationalist people who know better. The only way for LW to become good at discussing politics is to practice and get better.
(And even now, LW is not bad at discussing politics, and there have been many great political discussions here. While many of them have been a bit heated by the standards of LW, they are downright friendly compared to practically anywhere else.)
Unfortunately, the rest of the world doesn’t have the same level of humility about discussing political subjects. Many of the people most capable of discussing politics rationally seem to have the most humility. How long can we afford to have rationalists sit out of politics?
Hang on. Instrumental rationality.

If you want to make political impact, don’t have discussions about politics on blogs; go do something that makes the best use of your skills. Start an organization, work on a campaign, make political issues your profession or a major personal project.
If that doesn’t sound appealing (to me, it doesn’t, but people I admire often do throw themselves into political work) then talking politics is just shooting the shit. Even if you’re very serious and rational about it, it’s pretty much recreation.
I used to really like politics as recreation—it made me feel good—but it has its downsides. One, it can take up a lot of time that you could use to build skills, get work done, or have more intense fun (a night out on the town vs. a night in on the internet.) Two, it can make you dislike people that you’d otherwise like; it screws with personal relationships. Three, there’s something that bothers me morally, a little, about using issues that are serious life-and-death problems for other people as my form of recreation. Four, in some cases, including mine, politics can hurt your personal development in a particular way: I would palliate my sense of not being a good person by reassuring myself that I had the right opinions. Now I’m trying to actually be a better person in practice, and also trying to worry less about imaginary sins; it’s work in progress, of course, but I feel I don’t need my “fix” of righteous anger as much.
This is a personal experience, of course, but I think that it’s worth it for everyone to ask, “Why do I talk politics? Do I want to talk politics?”
“If you want to make political impact, don’t have discussions about politics on blogs; go do something that makes the best use of your skills. Start an organization, work on a campaign, make political issues your profession or a major personal project.”
You omit the most important step, which comes before starting an organization. That’s figuring out what politics this organization should espouse and how it should espouse those politics.
If my views are almost diametrically opposed to Robin Hanson’s, and I have no good reason to think I’m more rational than Robin or otherwise in a better epistemic position, I’m not rationally justified in setting up an organization to espouse my views, because I should consider, in that event, that my views have at least a .5 chance of being wrong, probably much higher. The worst thing people can do is set up political projects based on ill-considered principles that end up advocating the wrong policies. As long as rational, informed people disagree, one isn’t entitled to a strongly held political position.
What you said might make sense if political debate were strictly about means and there was general agreement on ends. But it is not. And your views on the ends of policy are worth every bit as much as Dr. Hanson’s, however much you worry that his thinking might be better than yours concerning means.
Do you think having LW discuss politics will help save the world? If so, how do you envision it happening?

Just to make sure there is no confusion about who stands where on the issue, I’d like to re-emphasize that I definitely don’t support making politics a prominent item on the discussion agenda of LW. What I am concerned about are topics that are on LW’s discussion agenda as presently defined, but have some implications about political and other charged issues, and the question of whether these should be avoided. (Though of course this is complicated by the fact that the present discussion agenda is somewhat vague and a matter of some disagreement.)
Why do you find it beneficial to bring up implications about political and other charged issues, when discussing topics that are on LW’s discussion agenda?
I can understand it if you’re making some point about improving rationality in general, and the best example to illustrate your point happens to be political, and you judge the benefit of using that example to be worth the cost (e.g., the risk that LW slides down the slippery slope towards politics being prominently debated, and others finding it difficult to respond to your point because they want to avoid contributing to sliding down that slippery slope).
If it’s more like “btw, here are some political implications of the idea I was talking about” then I think we should avoid those.
It could be that by far the main corruptor of rationality, which does by far the most damage however you want to measure it, is the struggle for political power. If that’s the case, then it may be unavoidable to discuss power and therefore politics.
The high point of human rationality is science, but as it happens, the scientific establishment has been so thoroughly dominated by the government (government supports much of academia, government supports much of science through grants, government passes laws which make it difficult to conduct science without official government approval, government controls the dissemination of scientific claims) that corruption of science by politics seems inevitable. If in fact science is corrupt from top to bottom (as it may be), then such corruption is almost certainly almost entirely at the hands of the state, and is therefore almost certainly political. So, if science is thoroughly corrupt, then it is almost certainly virtually impossible to discuss that corruption at all seriously without getting heavily into politics.

The poster child perhaps, but I wouldn’t go as far as to say the high point. :)
Why do you find it beneficial to bring up implications about political and other charged issues, when discussing topics that are on LW’s discussion agenda?
I don’t think one should bring up such implications just for the hell of it, when they contribute nothing of substance. I also agree that among otherwise equally useful examples, one should use those that are least distracting and that minimize the danger of dissension. There’s a simple cost-benefit case there, which I don’t dispute. However, it seems to me that many relevant topics are impossible to discuss without bringing up such implications.
Take for example my original post that started this discussion. For anyone who strives to be less wrong about almost anything, one of the absolutely crucial questions is what confidence should be assigned to what the academic mainstream says, and in this regard, I consider the topic of the post extremely relevant for LW. (If you believe otherwise, I would be curious to see the argument why—and note that what I’m arguing now is independent of what you might think about the quality of its content.) Now, I think nobody could dispute that on many topics the academic opinion is biased to some extent due to political and ideological influences, so it’s important to be able to recognize and evaluate such situations. Moreover, as far as I see, this represents a peculiar class of bias that cannot be adequately illustrated and discussed without bringing up some concrete examples of biases due to ideological or political influences. So, how could one possibly approach this issue while strictly avoiding the mention of anything that’s ideologically charged at least by implication?
Yet some people apparently believe that this line of inquiry already goes too far towards dangerous and undesirable topics. If this belief is correct, in the sense that maintaining a high quality of discourse really demands such a severe restriction on permissible topics, then this, in my opinion, decisively defeats the idea of having a forum like LW, under any reasonable interpretation of its mission statement, vague as it is. It effectively implies that people are inherently incapable of rational discourse unless it’s stringently disciplined and focused on a narrow range of topics, the way expert technical forums are. And this is definitely not the only example of how charged issues will inevitably be arrived at by people discussing the general problems of sorting out truth from bias and nonsense.
There are also other important points here, on which I’ve already elaborated in my other comments, which all stem from the same fundamental observation, namely that those topics where one needs an extraordinary level of rationality to escape bias and delusion are often exactly those that are commonly a matter of impassioned and polarized opinion. In other words, general skills in rational thinking and overcoming bias are of little use if one sticks to technical topics in which experts already have sophisticated, so to say, application-specific techniques for eliminating bias and nonsense. (Which often work well—one can easily think of brilliant scientists and technical experts with outright delusional opinions outside of their narrow specialties—and when they don’t, the issue may well be impossible to analyze correctly without getting into charged topics.) But even if you disagree with my view expressed in this last paragraph, I think your question is adequately answered by the points I made before that.
So, how could one possibly approach this issue while strictly avoiding the mention of anything that’s ideologically charged at least by implication?
How about using an example from the past? A controversy that was ideologically charged at some point, but no longer inflames passions in the present? I’m not sure if there are such examples that would suit your purpose, but it seems worth looking into, if you hadn’t already.
Overall I don’t think we disagree much. We both think whether to bring up political implications is a matter of cost-benefit analysis and we seem to largely agree on what count as costs and what as benefits. I would just caution that we’re probably biased to over-estimate the net benefit of bringing up political implications since many of us feel strongly motivated to spread our favorite political ideas. (If you’re satisfied that you’ve already taken into account such biases, then that’s good enough for me.)
How about using an example from the past? A controversy that was ideologically charged at some point, but no longer inflames passions in the present?
Trouble is, the present system that produces reputable and accredited science and scholarship is a rather novel creation. Things worked very differently as recently as two or three generations ago, and I believe that an accurate general model for assessing its soundness on various issues necessarily has to incorporate judgments about some contemporary polarized and charged topics, which have no historical precedent that would be safely remote from present-day controversies. As Constant wrote in another reply to your above comment, modern science is so deeply intertwined with the modern system of government that it’s impossible to accurately analyze one without asking any questions about the other.
And to emphasize this important point again, I believe that coming up with such a model is a matter of supreme importance to anyone who wants to have correct views on almost any topic outside of one’s own narrow areas of expertise. Our society is historically unique in that we have these vast institutions whose mission is to produce and publish accurate insight on all imaginable topics, and for anyone intellectually curious, the skill of assessing the quality of their output is as important as distinguishing edible from poisonous fruit is for a forager.
I would just caution that we’re probably biased to over-estimate the net benefit of bringing up political implications since many of us feel strongly motivated to spread our favorite political ideas.
That is surely a valid concern, and I probably display this bias myself at least occasionally. Like most biases, however, it also has its mirror image, i.e. the bias to avoid questions for fear of stirring up controversy, which one should also watch for.
This is not only because excessive caution means avoiding topics that would in fact be worth pursuing, but also because of a more subtle problem. Namely, the set of all questions relevant for a topic may include some safe and innocent ones alongside other more polarizing and charged ones. Deciding to include only the former into one’s assessment and ignoring the latter for fear of controversy may in fact fatally bias one’s final conclusions. I have seen instances of posts and articles on LW that, in my opinion, suffer from this exact problem.
As far as I know, nobody cares what LessWrong commenters think about political issues. LessWrong should concentrate on less crowded topics where it potentially has actual influence, like AI risks.
Do you (pl.) think it would be valuable to have a discussion topic on whether political discussion could be fruitful (possibly with links to relevant discussions, etc.)?
(Not to say “take it elsewhere”, but rather, “should we have this discussion somewhere it’ll be easier to keep track of”.)
Merely saying that there are topics too inflammatory even for LW is one thing, but remember that the context of my remark was a discussion of whether topics should be avoided even if they have only indirect implications about something that might inflame passions. The level of caution that some people seem to believe should be exercised would in my opinion, if really necessary, constitute evidence against the supposedly high level of rationality on LW. (And in the eyes of many people, the contradiction would also have a bad PR effect.)
Please also see my above reply to Vladimir Nesov in which I elaborate on this further.
Certainly, I find it comically absurd that there should be a community of people boasting about their “rationality” who at the same time have to obsessively self-censor to avoid turning their discussions into food fights.
Fallacy of gray. Nobody is perfectly rational, but that doesn’t make all people equally rational. Also, you used the inflammatory and imprecise “boasting” characterization.
While not relying on helpful techniques is a good way of signaling ability, it’s a bad way of boosting performance. The virtue of humility is in taking every precaution even if all seems fine already, or even if the situation looks hopeless.
On the practical question, I think eliminating politics was an inspired decision that should continue to be followed, and I think the lead article was not political; I also think it’s the best post in a good while. Nevertheless, I find the fact that we must avoid politics troubling. If we’re succeeding in making ourselves rational, this—one would think—would lead to a political convergence. This would be a nice empirical test of the value and possibility of becoming more rational by the methods we employ, which we should treat as an empirical question. It’s a shame we can’t conduct this test.
I will be very impressed if we can get Aumann agreement on hot political issues.
I suspect that the result on many of them would be convergence to realizing that we don’t know what the best solution is, but that might be my prejudices talking.
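For concreteness, here is a minimal sketch (in Python; the four-state setup is a standard toy example, and everything in it is illustrative rather than anyone’s actual model) of the Geanakoplos–Polemarchakis back-and-forth that underlies Aumann agreement: two Bayesian agents with a common prior take turns announcing their posteriors for an event, each updates on what the other’s announcement reveals, and with finitely many states they provably end up agreeing.

```python
from fractions import Fraction

def posterior(cell, event):
    # P(event | cell) under a uniform common prior over the states
    return Fraction(len(cell & event), len(cell))

def refine(partition, announcement_events):
    # each agent learns which announcement event occurred,
    # so every cell of their partition is split along those events
    new_partition = []
    for cell in partition:
        for ev in announcement_events:
            piece = cell & ev
            if piece:
                new_partition.append(piece)
    return new_partition

def aumann_dialogue(states, event, partitions, true_state, max_rounds=20):
    parts = [[set(c) for c in p] for p in partitions]
    for rnd in range(max_rounds):
        speaker, listener = rnd % 2, 1 - rnd % 2
        cell = next(c for c in parts[speaker] if true_state in c)
        q = posterior(cell, event)
        # announcing q reveals exactly which of the speaker's cells
        # would have produced that number
        compatible = set().union(*(c for c in parts[speaker]
                                   if posterior(c, event) == q))
        parts[listener] = refine(parts[listener],
                                 [compatible, states - compatible])
        p = [posterior(next(c for c in parts[i] if true_state in c), event)
             for i in (0, 1)]
        print(f"round {rnd + 1}: agent 1 says {p[0]}, agent 2 says {p[1]}")
        if p[0] == p[1]:
            return p[0]

# the agents start out disagreeing (1/2 vs 1/3) but agree after three rounds
states = {1, 2, 3, 4}
event = {1, 4}
partitions = [[{1, 2}, {3, 4}], [{1, 2, 3}, {4}]]
aumann_dialogue(states, event, partitions, true_state=1)
```

Running it, the agents open at 1/2 versus 1/3 and agree on 1/2 by the third announcement. The catch, of course, is that the theorem needs common priors and honest reporting, which is precisely what hot political issues tend to lack.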
It’s worth noting that “we” is ill-defined here.

Supposing that what this site does successfully improves rationality among its participants, we should expect that someone like me, who has only been here for a few months, would be less rational than the folks who have been around for years benefiting from the site.
But a discussion of politics here would not exclude me, so even in that scenario we would expect such a discussion not to lead to convergence.
The proper empirical test, I suppose, would be to identify cohorts based on their tenure here, and conduct a series of such conversations within each such cohort—say, once a year—and evaluate whether a given cohort comes closer to convergence from year to year.
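If someone actually ran that test, the bookkeeping could be as simple as this sketch (all numbers are made up for illustration; “spread” is just the standard deviation of the probability estimates a cohort’s members give on one fixed question):

```python
from statistics import pstdev

# responses[year][cohort] = probability estimates that cohort's members gave
# on the same fixed political question in that year's survey (made-up data)
responses = {
    2010: {"joined 2008": [0.20, 0.50, 0.90], "joined 2010": [0.10, 0.50, 0.95]},
    2011: {"joined 2008": [0.30, 0.50, 0.70], "joined 2010": [0.15, 0.50, 0.90]},
    2012: {"joined 2008": [0.40, 0.50, 0.60], "joined 2010": [0.25, 0.50, 0.80]},
}

for year in sorted(responses):
    for cohort in sorted(responses[year]):
        spread = pstdev(responses[year][cohort])
        print(f"{year} {cohort}: spread = {spread:.3f}")
# the hypothesis predicts each cohort's spread shrinking from year to year,
# with longer-tenured cohorts tighter throughout
```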
If we’re succeeding in making ourselves rational, this—one would think—would lead to a political convergence.
Politics includes much which is a matter of preference, not just accurate beliefs about the world. For example, “I like it when I get more money when X is done” is the core of many political issues. Perhaps more importantly, different preferences with respect to aggregation of human experiences can lead to genuine disagreement about political policy even among altruists. For example, an altruist who has values similar to those that Robin Hanson blogs about will inevitably have a political disagreement with me no matter how rational we both are.
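A toy example of that last point (hypothetical utilities, not a claim about anyone’s actual values): two altruists can agree on every factual outcome and still rank policies oppositely, purely because they aggregate welfare differently.

```python
# two altruists agree on all the facts about two policies, but aggregate
# welfare differently (total vs. average view); utilities are invented
policy_outcomes = {
    "policy A": [5, 5, 5, 5],  # four people, modest welfare each
    "policy B": [9, 9],        # two people, high welfare each
}

total   = {p: sum(u)          for p, u in policy_outcomes.items()}
average = {p: sum(u) / len(u) for p, u in policy_outcomes.items()}

print("totalist prefers: ", max(total, key=total.get))      # policy A (20 > 18)
print("averagist prefers:", max(average, key=average.get))  # policy B (9 > 5)
```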
Political beliefs should converge. And if that happens, whatever differences remain won’t be resolved by discussion, because there’s nothing left to discuss.

If we can distinguish between preference and accuracy claims, that would be quite a large step towards rationality.
Indeed, but the trouble is of course that often the optimal strategy for promoting one’s preferences is to convince people that opposing them is somehow objectively wrong and delusional, rather than a matter of a fundamental clash of power and interest. (Which of course typically involves convincing oneself too, since humans tend to be bad at lying and good at sniffing out liars, and they appreciate sincerity a lot.)
That said, one of the main reasons why I find discussions on LW interesting is the unusually high ability of many participants to analyze issues in this regard, i.e. to separate correctly the factual from the normative and preferential. The bad examples where people fail to do so and the discourse breaks down tend to stick out unpleasantly, but overall, I’d say the situation is not at all bad, certainly by any realistic standards for human discourse in general.
Fallacy of gray. Nobody is perfectly rational, but that doesn’t make all people equally rational.
It would be such a fallacy if I had claimed that one must either reach absolute perfection in this regard or admit being no better than others. In reality, however, I claimed that people who have to avoid any discussion at all that has even indirect and remote implications about sensitive topics for fear of discourse breaking down have no grounds for claiming to be somehow more “rational” than others (controlling of course for variables like intelligence and real-life accomplishment).
Also, you used the inflammatory and imprecise “boasting” characterization.
In retrospect, yes, I should have expressed myself more diplomatically. Also, I didn’t mean to imply that everyone or even a large part of the participants behave like that. However, it is not at all rare to see people on LW making remarks about “rationality” whose self-congratulatory aspect is, if not explicit, not too terribly subtle either. This, in my opinion, is bad PR already, because praising oneself is low status as a general principle, and a combination of such statements with an admission of inability to maintain the quality of discourse about all but the most innocent topics gives the whole thing a tinge of absurdity. That, at least, is my honest impression of how many people are going to see these things, with clear implications for the PR issue.
On the other hand, if the quality of discourse outside of technical topics really cannot be maintained, then the clear solution is to formulate a strict policy for what’s considered on-topic, and enforce it rigorously. That would not only make things function much better, but it would also be excellent from a PR perspective. (Rather than giving off a bad “we can’t handle sensitive topics” impression, it would give off a high status “we don’t want to be bothered with irrelevancies” impression.)
people who have to avoid any discussion at all that has even indirect and remote implications about sensitive topics for fear of discourse breaking down have no grounds for claiming to be somehow more “rational” than others
Maybe they do, maybe they don’t, but you didn’t ask. Basically you infer a conclusion here, and claim that no proof to the contrary is therefore possible.
Certainly, I find it comically absurd that there should be a community of people boasting about their “rationality” who at the same time have to obsessively self-censor to avoid turning their discussions into food fights.
When you have made this argument before, I responded:
There are some people here who I would trust to have rational discussions about the policy decisions that politics is supposedly about, and which candidates are likely to implement which policies and which tradeoff is better. My expectation if they tried to have that discussion on this public internet site is that they would draw attention and participation of less skilled members who would drag the discussion down into typical mind killing politics, and probably draw new people to Less Wrong who are not so interested in rationality and getting the right answer as joining in the tribal political argument.
It seems inappropriate to me for you to repeat this argument without addressing my response.
JGWeissman,

Please pardon my lack of response to your argument—back in that thread the volume of replies to my comments became too large for me to respond to all of them. Better late than never, though, so here is my response.
I certainly don’t think constant discussions of everyday politics on LW would be interesting or desirable. Someone who wants to do that has countless other places on the internet, tailored to all possible opinions and tastes, and there is absolutely no need to clutter up LW with it. However, what we’re debating is at the other extreme, namely whether there should be strict censorship (voluntary or not) of all discussions that have even remote implications for politics and other topics that are likely to inflame passions.
I think the answer is no, for several reasons. First, there are interesting questions relevant for issues at the core of LW’s mission statement that inevitably touch on sensitive topics. Second, for some potentially sensitive questions I find extremely interesting (and surely not just I), LW really is a venue where it’s possible to get a uniquely cool-headed and rational analysis, so avoiding those would mean forsaking some of the forum’s greatest potential. Finally, as I’ve already mentioned, the idea of a self-congratulatory “rationalist” community that in fact suffers from the same problems as any other place whenever it comes to sensitive topics is comically bad PR for whatever causes LW is associated with.
Of course, it may be that LW is not capable of handling sensitive topics after all. But then, in my opinion, the present way it’s constituted doesn’t make much sense, and it would benefit from a reorganization that would impose much more precisely defined topic requirements and enforce them rigorously.
You seem to be restating your position, without actually addressing my point that a policy that takes into account the likely behaviours of LW members of various levels of skill and experience, including those who have recently joined, does not reflect on the capabilities of the experienced, high level members.
If you cannot address this point, you should stop repeating your argument that such rational people should be able to handle political discussion.
You seem to be restating your position, without actually addressing my point that a policy that takes into account the likely behaviours of LW members of various levels of skill and experience, including those who have recently joined, does not reflect on the capabilities of the experienced, high level members.
I don’t see how this objection is specific to sensitive topics. Assuming that regular participants maintain high enough standards, incompetent attempts by newbies to comment on sensitive topics should be effectively discouraged by downvoting, as in all other debates. Even in the most innocent technical discussions, things will go downhill if there is no mechanism in place to discourage unproductive and poorly thought out comments. In either case, if the voting system is ineffective, it means that more stringent moderation is in order.
On the other hand, if even the behavior of regular participants is problematic, then we get back to the problems I was writing about.
In innocent technical discussions, users will generally base their votes only on the merits of the comments they’re voting on. In sensitive political discussions, some will vote based on ideological agreement.
A problem common to both cases is that LessWrong is hesitant to vote anything down below zero, possibly for good morale-related reasons.
I’m not necessarily advocating complete censorship. Special cautionary reminders around political topics and disciplined downvoting might do the trick.
I don’t see evidence for bad PR here. I haven’t seen anyone cite the politics taboo as a reason to shun LessWrong, and in general it isn’t unusual for sites to have rules like this. While it would certainly be embarrassing if the average LessWrong commenter weren’t at least a little more rational than the average internet commenter, productive political discussion between internet commenters not pre-selected for agreement is a notoriously hard problem.
If you’re worried about bad PR, I suspect there’s a better case that bad PR will be caused by LessWrong arriving at conclusions that are true but disreputable.
Could someone point me to where the politics taboo is actually articulated? Re-reading Eliezer’s post “Politics is the Mind-Killer”, I see that he identifies many of the pitfalls of discussing gender politics, but I never got the sense that he advocated prohibiting discussion of controversial political subjects:
I’m not saying that I think Overcoming Bias should be apolitical, or even that we should adopt Wikipedia’s ideal of the Neutral Point of View. But try to resist getting in those good, solid digs if you can possibly avoid it. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it—but don’t blame it explicitly on the whole Republican Party; some of your readers may be Republicans, and they may feel that the problem is a few rogues, not the entire party. As with Wikipedia’s NPOV, it doesn’t matter whether (you think) the Republican Party really is at fault. It’s just better for the spiritual growth of the community to discuss the issue without invoking color politics.
If you’re worried about bad PR, I suspect there’s a better case that bad PR will be caused by LessWrong arriving at conclusions that are true but disreputable.
That is indeed a good point. Still, I do think my original concern is valid too.
In any case, given the opinions exchanged in this discussion (and other similar ones), I do believe that LW is in need of a clearer official policy for what is considered on-topic. I find commenting here a lot of fun, and what I write is usually well received as far as the votes and replies appear to indicate, but occasional comments like yours leave me with an unpleasant impression that a significant number of people might strongly disapprove of my attitudes and choices of topics. I certainly have no desire to do anything that breeds ill will, but lacking clearer rules, it seems to me that this conflict (assuming it’s significant) is without an obvious resolution, unless we are to treat any complaint as a liberum veto (which I don’t think would be workable as a general principle).
Are you saying that there are some points relevant to this discussion that you’re reluctant to bring up because they are “a bad idea to talk about”?

Sure.
Well, you sure have whetted my curiosity with that. I honestly don’t see anything in the post and the subsequent comments that warrants such grave observations, but it might be my failure of imagination.
Apologies if I sounded snippy, or if I demotivated you from commenting. I like your attitudes and topic choices generally; it’s just that I’m worried about the effects of creating a precedent for people to be talking about such topics on this particular site. Again, I’m not even confident that the effects are harmful on net, but there seems to have been widespread support of the recommendation to avoid politically charged examples, and it bothered me that people seemed to be letting that slip just because it’s what happens by default. In any case, the length of this thread probably suggests I care more about this issue than I actually do, and for now I’ll just agree that it would be nice to have clearer rules and bow out.
most of the harm may be in low-probability large-scale arguments, like the ones we had about gender.

Why did you think it was low-probability? I put it at very high probability.