Ok, there’s no way to say this without sounding like I’m signalling something, but here goes.
As I’ve already explained, there’s a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it’s stupid; otherwise I wouldn’t say it. But at least I’m not so arrogant as to go around demanding that other people acknowledge my highly advanced rationality. I don’t demand that they accept “Chris isn’t saying anything stupid” as an axiom in order to engage with me.
“If you can’t say something you are very confident is actually smart, don’t say anything at all.” This is, in fact, why I don’t say very much, or go into much detail, most of the time. I have all kinds of thoughts about all kinds of things, but I’ve had to retract sincerely held beliefs so many times that I no longer bother embarrassing myself by opening my big dumb mouth.
Somewhat relatedly, I’ve begun to wonder whether “rationalism” is really good branding for a movement. Rationality is systematized winning, sure, but the “rationality” branding isn’t as good at keeping that front and center, especially compared to, say, the effective altruism meme.
In my opinion, it’s actually terrible branding for a movement. “Rationality is systematized winning”; ok, great, what are we winning at? Rationality and goals are orthogonal to each other, after all, and at first glance, LW’s goals can look like nothing more than an attempt to signal “I’m smarter than you” or even “I’m more of an emotionless Straw-Vulcan cyborg than you” to the rest of the world.
This is not a joke: I actually have a friend who virulently hates LW and resents his friends who get involved in it, because he thinks we’re a bunch of sociopathic Borg wannabes following a cult of personality. You might have an impulse right now to just call him an ignorant jerk and be done with it, but look: would you prefer the world in which you get to feel satisfied about having identified an ignorant jerk, or the world in which he’s actually persuaded of some rationalist ideas, makes some improvements to his life, maybe donates money to MIRI/CFAR, and so on? The latter, unfortunately, requires social engagement with a semi-hostile skeptic, which we all know is much harder than just calling him an asshole, taking our ball, and going home.
So anyway, what are we trying to do around here? It should be mentioned a bit more often on the website.
(At the very least, my strongest evidence that we’re not a cult of personality is that we disagree amongst ourselves about everything. On the level of sociological health, this is an extremely good sign.)
That bit of LessWrong jargon is merely silly. Worse, I think, is the jargon around politics. Recently, a friend gave “they avoid blue-green politics” as a reason LessWrongians are more rational than other people. It took a day before it clicked that “blue-green politics” here basically just meant “partisanship.” But complaining about partisanship is old hat—literally. America’s founders were fretting about it back in the 18th century. Nowadays, such worries are something you expect to hear from boringly middle-brow columnists at major newspapers, not edgy contrarians.
While I do agree about the jargon issue, I think the contrarianism and the meta-contrarianism often make people feel they’ve arrived at A Rational Answer, at which point they stop thinking.
For instance, if Americans have always thought their political system is too partisan, has anyone in political science actually bothered to construct an objective measurement and collect time-series data? What does that time series actually say? Besides, once we strip off the tribal signalling, don’t all those boringly mainstream ideologies have a few real points worth engaging with?
(Generally, LW is actually very good at engaging with those points, but we also simultaneously signal that we’re adamantly refusing to engage in partisan politics. It’s like playing an ideological Tsundere: “Baka! I’m only doing this because it’s rational. It’s not like I agree with you or anything! blush”)
I’ve made no secret of the fact that I’m not a big fan of the principle of charity—often defined as the rule that you should interpret other people’s arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid.
Ok, but then let me propose a counter-principle: the Principle of Informative Calling-Out. I actively prefer to be told when I’m wrong and corrected. Unfortunately, once you ditch the principle of charity, the most common response to an incorrect statement becomes, essentially, “Just how stupid are you!?”, or some other form of low-information signalling about my interlocutor’s intelligence and rationality compared to mine.
I need to emphasize that I really do think philosophers are showing off real intelligence, not merely faux-cleverness. GRE scores suggest philosophers are among the smartest academics, and their performance is arguably all the more impressive because GRE quant scores are bimodally distributed depending on whether your major required you to spend four years practicing your high school math, and philosophy is one of the majors that doesn’t grant that advantage. So if you think it’s wrong to dismiss the views of high-IQ people, you shouldn’t be dismissive of mainstream philosophy. But in fact I think LessWrong’s oft-noticed dismissiveness of mainstream philosophy is largely justified.
You should be looking at this instrumentally. The question is not whether you think “mainstream philosophy” (the very phrase is suspect, since mainstream academic philosophy divides into a number of distinct schools, Analytic and Continental being the two biggest off the top of my head) is correct. The question is whether you think you will, at some point, have any use for interacting with mainstream philosophy and its practitioners. If they will be useful to you, it is worth learning their vocabulary and their modes of operation in order to, when necessary, enlist their aid or win at their game.