Somewhere in the vastness of the Internet, it is happening even now. It was once a well-kept garden of intelligent discussion, where knowledgeable and interested folk came, attracted by the high quality of speech they saw ongoing. But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting. (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)
So the garden is tainted now, and it is less fun to play in; the old inhabitants, already invested there, will stay, but they are that much less likely to attract new blood. Or if there are new members, their quality also has gone down.
When I saw the question posted in the discussion, I thought it had the potential to be a good discussion topic. After all, I reasoned, there are many thoughtful people on LessWrong who are interested in politics, history, and political philosophy. There are a lot of insights to be gained from discussing interesting and difficult questions about society. And there are quite a few insightful neoreactionaries here, e.g. Konkvistador and others (some of whom sadly no longer actively participate on LW). And some neoreactionary ideas are interesting and worth a look.
Despite all this, many subthreads of that thread basically turned into an unproductive flamewar. Why? Well, politics is the mind-killer, of course. What should I have expected? Nevertheless, I think it could have been avoided. I am not totally against political discussion in some threads, and many comments in that thread are good. Taken individually, many comments are quite reasonable: some of them, for example, explain a certain position, and while you may agree or disagree with that position, you can’t say anything bad about the comment itself. In aggregate, however, they do not create a productive atmosphere in the thread. While many comments are reasonable, it is suspiciously easy to sort most of them into two groups, pro- and anti-neoreaction, with little middle ground of people who could act as (sort of) judges to help evaluate the claims of both sides. There is suspiciously little belief updating (even on small issues) going on (maybe it is different among lurkers), which is probably a very important measure of whether a discussion was actually productive. (I do not claim that all LessWrong discussions are productive. A lot of them aren’t.) Many people aren’t arguing in good faith. Some of them even post links to this discussion on other forums as if it were representative of LessWrong as a whole.
I am not calling for censorship or deletion of certain comments. Nor do I want discussion of controversial issues to be prohibited. I am calling for a moment of reflection about mind-killing: a moment of consideration about whether you yourself aren’t mind-killed, whether you yourself are no longer updating your beliefs honestly, whether you are no longer arguing in good faith (even if the other side lowered its standards first). I don’t know what ritual could be devised to reinforce this point. Maybe it is true that there are only three stable equilibria (Vladimir_M, alas, no longer comments on LessWrong). Is it possible to devise something, I don’t know, maybe a ritual or a social norm or something else, that would help keep things at the unstable point of having both a broad scope of questions and high-quality discussion? Or is any such attempt doomed to crumble under the influence of the discussion standards of the outside world?
In addition to that, I think the question itself was poorly worded. It was way too broad. Questions that are likely to be polarizing would benefit from being much more narrow and much more specific. Maybe this way everyone would be talking about the same thing, as it would be much harder to steer the discussion toward the things you would rather talk about. Maybe this way everyone would have a clearer and more concrete idea of what everyone else is talking about, making it easier to reason about the whole situation and easier to weigh the evidence in favour of one position or the other.
I agree it would be good to find a better method to debate politics. Maybe we should have a meta-rule that anyone who starts a political debate must specify rules for how the topic should be debated. (So now the burden is on the people who want to debate politics here.)
It seems to me that in political topics most updating happens between conversations. It’s not like you say something and the other person goes “oh, you totally convinced me, I am changing my mind now”. Instead, you say something, the other person looks at you very suspiciously and walks away. Later they keep thinking about it, maybe google some data, maybe talk with other people, and the next time you meet them, their position is different from last time.
For example, I have updated, from mildly pro-NR to anti-NR. I admit they have a few good points. But this is generally my experience with political movements: they are often very good at pointing out the obvious flaws of their competitors; the problem is that their own case is usually not much better, only different. I appreciate the few insights; they made me update, and I still keep thinking about some of it. I just didn’t come to the same conclusion; I separated the stuff that makes sense to me from the stuff that doesn’t. Just as I try to draw good ideas e.g. from religion without becoming religious, instead of buying the whole package, I take a few bricks and add them to my model of the world. There are a few bricks in my model now that an outside observer could call “neoreactionary”, although that would probably depend on the exact words I used to describe them (because they are not unique to NR). The other bricks I have judged separately, and I was unimpressed. That’s where I am now.
There is also this irritating fact that NRs keep associating themselves with LW. I consider that a huge dishonesty and, in a way, an attack on this community. If people are impressed by LW, this can make them more open towards NR. If people are disgusted by NR, this can make them dislike LW by association. They gain, we lose. It never goes the other way round; no one is going to debate overcoming their cognitive biases just because they fell in love with NR. To put it bluntly, we are used as a recruitment tool for some guy’s cult, and all his shit falls on our heads. Why should we tolerate that? (This, especially #1, should be required reading for every nerd.) That alone makes me completely unwilling to debate with them, because such debates are then used as further evidence that “LW supports NR”. (As an analogy, imagine how much you would want to have a polite debate with a politician you dislike, if you knew that the reason he debates with you is so that he can take a photo of you two having a conversation, put it on his personal webpage, and claim that you are one of his supporters, to impress people who know you.) I refuse to ignore this context, because I am strongly convinced that NRs are fully aware of what they are doing here.
So even if we try having rational debates about politics, I would prefer to try them on some other political topics.
Maybe we should have a meta-rule that anyone who starts a political debate must specify rules for how the topic should be debated. (So now the burden is on the people who want to debate politics here.)
I think this is a great suggestion, since it allows different standards for different types of political discussion, as well as giving us a chance to actually observe which set of rules leads to the most productive discussion.
It seems to me that in political topics most updating happens between conversations.
Well, I think this is probably true in my experience. On the other hand, since this is an internet forum, no one is forcing anyone to post their answers immediately. Maybe for most people it takes months to change their position on a significant political belief even when they have a lot of evidence contradicting that belief, so we should not expect a given person to change their beliefs right after a conversation. However, thinking at the margin, there must have been people who were on the fence. There must have been people who quickly jump from one set of beliefs to another whenever someone posts an interesting essay. Maybe for them a week would have been enough to update? And since this was not a real-time conversation, they could have posted about their update after a week; was it their pride that prevented them from doing so? However, fewer people seemed to be on the fence than I expected; “the distribution of opinions about neoreaction” seemed bimodal.
However, now that I write this, I realize that such people would have been less motivated to state their beliefs in the first place, and thus they were underrepresented in the total volume of posts in that thread. So it is possible that the impression of bimodality is partially an artefact of that.
It is good to hear that you have found something in that thread that you thought was worth updating on. I also agree that neoreaction is better at finding the flaws of other movements (for example, I think that some trends they describe as dangerous actually are dangerous), and at providing intellectual tools for thinking about the world that can be added to one’s toolbox, than at providing an alternative course of action, an alternative model of what kind of society is good, or an alternative movement worth following. (I am not a neoreactionary. Whether those tools accurately describe the world is a different question, but to me they seem at least worth thinking about: can they shed some light on things that other intellectual tools neglect?) It seems to me that neoreaction is more a reaction to progressivism (in the neoreactionary sense of the word) than a coherent set of goals in itself; the groups that compose neoreaction seem as different from each other as either of them is from progressivism. So, basically, I think my position towards neoreaction is somewhat similar to yours.
There is also this irritating fact that NRs keep associating themselves with LW. I consider that a huge dishonesty and in a way an attack on this community. If people are impressed by LW, this can make them more open towards NR. If people are disgusted by NR, this can make them dislike LW by association. They gain, we lose. It never goes the other way round; no one is going to debate overcoming their cognitive biases just because they fell in love with NR. To put it bluntly, we are used as a recruitment tool
This is where my intuition differs from yours. Maybe this is because I have never been to a LW meetup, nor have I ever met another person who reads LW in real life. I have also never met a single neoreactionary in real life. Or maybe I simply don’t know about them; I don’t think I have ever met a single SJW in real life either. I understand that LessWrong consists of real people, but when I think about LessWrong, the mental image that comes to mind is that of a place, an abstract entity, not a community of people. Although I obviously understand that without all these people this place would not exist, the mental image of LessWrong as “a place (maybe cloudlike, maybe vaguely defined) where LW-style discussion about LW topics happens” (style and topics are the most important part of what defines LW to me) feels more real to me than the mental image of a community of people. I do not know much about LW posters beyond what they post here or on other blogs. For example, when I first started reading LessWrong, for quite a long time I thought that Yvain and Gwern were women. Why did I think this? I don’t remember. What I’m trying to say is that I guess the difference between our intuitions may come from the difference in how we weigh these two layers (place, style of discussion, and topics vs. a community of real people). It may be a bias on my part that I neglect the community-building aspect of LessWrong (i.e. I don’t know what kind of thinking leads to an optimal outcome, nor how exactly such an optimal outcome would look); I am not sure. I haven’t disentangled my thoughts about these things yet; they are very messy. This post is partially an attempt to write down my intuitions about them (as you can see, it is not a very coherent argument); maybe it will help me clarify some things.
In addition to that, while an individual identity is relatively well defined (“I am me”), the identity of someone who belongs (or does not belong) to a certain group is much less clearly defined, and whether someone actively feels that they belong to a certain group seems to depend on the situation.
What I am trying to say is that when I see neoreactionaries commenting on LessWrong, I do not perceive them as “them” if they talk about LW topics in a manner close enough to the LessWrong style. In this situation, I do not perceive LWers and LW neoreactionaries as groups distinct enough for a statement about an attack on the community to make sense. In fact, in this situation, only a small part of my attention is dedicated to identity-related thoughts. The situation is different when, e.g., I read someone’s comments (usually outside of LessWrong) attacking LessWrong. In that case the part of my attention dedicated to identity-related things is much larger. In such situations, I do think of myself as someone who regularly reads LessWrong and finds it a great place with a lot of interesting people who write about their insights; when someone attacks it, my emotions create an urge to defend LessWrong. A much larger part of my attention is dedicated to this, and I do start thinking in terms of who belongs to what group. But unless it is neoreactionaries who are attacking LessWrong, I usually still do not feel (I am just describing what I feel in such situations) that LW neoreactionaries (not neoreactionaries in general) are a distinct group. Thus, in my case, it seems that conflicts and disagreements create a sense of identity (even more than vice versa), since, as I have said, I have never participated in an offline LW community. (to be continued in the next comment)
fewer people seemed to be on the fence than I expected, “the distribution of opinions about neoreaction” seemed bimodal
I suspect this is the polarizing effect of politics, not something specific to LW or to neoreaction. We are talking about labels, not ideas. I may agree with half of the ideas of some movement and disagree with the other half, but I usually have a clear opinion about whether I want to identify with a label or not.
I understand that LessWrong consists of real people, but when I think about LessWrong, the mental image that comes to mind is that of a place, an abstract entity, not a community of people.
My mental image of the LW community is more or less “people who have read the Sequences, and in general agree with them”. Yes, I am aware that in recent years many people ignore this stuff, to the degree where mentioning the Sequences is a minor faux pas. (And for a while it was a major faux pas, and some people loudly insisted that telling someone to read the Sequences is lesswrongese for “fuck you”. Not sure how much of that attitude actually came from the “Rational”Wiki.) That, in my opinion, is a bad thing, and it sometimes leads to reinventing the wheel in the debates. To put it shortly, it seems to me we have lost the ability to build new things, and became an online debate club. Still a high-quality online debate club. Just not what I hoped for at the beginning.
What I am trying to say is that when I see neoreactionaries commenting on LessWrong, I do not perceive them as “them” if they talk in a manner that is close enough to LessWrong style about the topics that are LW topics.
LessWrong was built upon some ideas, and one of them was that “politics is the mindkiller” and that we strive to become more rational, instead of being merely clever arguers. At this moment, neoreactionaries are the group most visibly violating this rule. They strongly contribute to the destruction of the walled garden. Debating them over and over again is privileging a hypothesis; why not choose any other fringe political belief instead, or try creating a new one from scratch, or whatever?
And I guess that if we are to overcome biases we will have to deal with politics.
Politics is an advanced topic for a rationalist. Before going there, one should make sure they are able to handle the easier situations first. Also, there should be some kind of feedback, some way of warning people “you have strayed from the path”. Otherwise we will only have clever arguers competing using their verbal skills. When a rationalist sympathetic to neoreaction reads the SSC neoreaction anti-faq, they should be deeply shocked and start questioning their own sanity. They should realize how much they have failed the art of rationality by not realizing most of that on their own. They should update about their own ability to form epistemically correct political opinions. Instead of inventing clever rationalizations for the already written bottom line.
In my opinion, Yvain is the most qualified person for the task of debating politics rationally, and the only obvious improvement would be to somehow find dozen different Yvains coming from different cultural backgrounds, and let them debate with each other. But one doesn’t get there by writing their bottom line first.
To put it shortly, it seems to me we have lost the ability to build new things, and became an online debate club.
Did LW as a group ever have this ability? Going by the archives, it seems that there were a small number (fewer than 10) of posters on LW who could do this. Now that they’re no longer posting regularly, new things are no longer produced here.
try creating a new one from scratch, or whatever?
A reasonable case could be made that this is how NRx came to be.
A reasonable case could be made that this is how NRx came to be.
If this is where NRx came from, then I am strongly reminded of the story of the dog that evolved into a bacterium. An alternative LW-like community that evolved into an aggressive political movement? Either everyone involved was an advanced hyper-genius or something went terribly wrong somewhere along the way. That’s not to say that something valuable did not result, but “mission drift” would be a very mild phrase.
Show me that movement in actual politics. Is any NRx-er running for office? Do they have an influential PAC? A think tank in Washington, some lobbyists, maybe?
Oh, I think we’re using the phrase “political movement” in different senses. I meant something more like “group of people who define themselves as a group in terms of a relatively stable platform of shared political beliefs, which are sufficiently different from the political beliefs of any other group or movement”. Other examples might be libertarianism, anarcho-primitivism, internet social justice, etc.
I guess this is a non-standard usage, so I’m open to recommendations for a better term.
Yep, looks like we are using different terminology. The distinction between political philosophy and political movement that I drew is precisely the difference between staying in the ideas/information/talking/discussing realm and moving out into the realm of real-world power and power relationships. What matches your definition I’d probably call a line of political thought.
Mencius Moldbug is a political philosopher. The Tea Party is a political movement.
When a rationalist sympathetic to neoreaction reads the SSC neoreaction anti-faq, they should be deeply shocked and start questioning their own sanity
Sentiments like this are, in my opinion, a large part of why “politics is the mind-killer.” I am no neoreactionary, but I thought the SSC neoreaction anti-faq was extremely weak. You obviously thought it was extremely strong. We have parsed the same arguments and the same data, yet come out with diametrically opposed conclusions. That’s not how it’s supposed to work. And this is far from a unique occurrence. I frequently find the same article or post being held up as brilliant by people on one side of the political spectrum, and dishonest or idiotic by people on the other side.
It is not merely that people don’t agree on what’s correct, we don’t even agree on what a successful argument looks like.
I thought the SSC neoreaction anti-faq was extremely weak. You obviously thought it was extremely strong. We have parsed the same arguments and the same data, yet come out with diametrically opposed conclusions. That’s not how it’s supposed to work.
Well, sometimes that’s exactly how it’s supposed to work.
For example, if you have high confidence in additional information which contradicts the premises of the document in whole or in part, and VB is not confident in that information, then we’d expect you to judge the document less compelling than VB. And if you wished to make a compelling argument that you were justified in that judgment, you could lay out the relevant information.
Or if you’ve performed a more insightful analysis of the document than VB has, such that you’ve identified rhetorical sleight-of-hand in the document that tricks VB into accepting certain lines of reasoning as sound when they actually aren’t, or as supporting certain conclusions when they actually don’t, or something of that nature, here again we’d expect you to judge the document less compelling than VB does, and you could lay out the fallacious reasoning step-by-step if you wished to make a compelling argument that you were justified in that judgment.
I don’t want to focus on the anti-neoreactionary FAQ, because I don’t want to get this dragged into a debate about neoreaction. In particular I simply don’t know how Viliam_Bur parsed the document, what additional information one of us is privy to that the other is not. My point is that this is a general issue in politics, where one group of people finds a piece compelling, and another group finds a piece terrible.
And note too that this isn’t experienced as something emotional or personal, but rather as a general argument for the truth. In this case, VB thinks neo-reactionaries should be “deeply shocked and start questioning their own sanity.” In other words, he thinks this is basically a settled argument, and implies that people who persist in their neoreaction are basically irrational, crazy or something along those lines. Again, this is a general issue in politics. People generally believe (or at least, talk like they believe) that people who disagree with them politically are clinging to refuted beliefs in the face of overwhelming evidence. I don’t just think this is due to epistemic closure, although that is part of it. I think it’s partly an emotional and cultural thing, where we are moved for pre-rational reasons but our minds represent this to us as truth.
I am certainly not saying I am immune from this, but I don’t have a third-party view of myself. I am not saying I am right and Viliam_Bur is wrong on the case in point. But I do wonder how many neoreactionaries have been deconverted by that FAQ. I suspect the number is very low...
To the extent that you’re making a general point—which, if I’ve understood you correctly, is that human intuitions of truth are significantly influenced by emotional and cultural factors, including political (and more broadly tribal) affiliations—I agree with your general point.
And if I’ve understood you correctly, despite the fact that most of your specific claims in this thread are about a specific ideology and a specific document, you don’t actually want to discuss those things. So I won’t.
I’m happy to discuss specifics, just not the neoreactionary FAQ. I agree with VB that LW has an unhealthy tendency for every discussion to become about neoreaction, and I don’t like it.
Instead, how about this article. Jim Edwards is a bright guy, and he clearly intended to persuade with that post. And indeed he has plenty of commenters who think he was making a valuable point. Yet I am at a loss to say what it is. Here he is, claiming to have a graph showing that government spending affects economic growth, yet all that graph shows is changes in government spending. It doesn’t show a correlation, it doesn’t suggest causation, it doesn’t do anything of the sort. Yet some people find this persuasive.
When someone says they like dance music (for example), I feel like I’m missing out; they get joy out of something I hate, which in some ways makes them better than me, but fundamentally de gustibus non est disputandum. The older I get, the more I feel like that’s how all persuasion works.
Yup, those charts puzzle me, too (based on about five seconds of analysis, admittedly, but I have a strong preexisting belief that there are many examples of such silliness on the Internet, so I’m strongly inclined to agree that this particular chart is yet another example… which is of course yet another example of the kind of judgment-based-on-non-analytical factors we’re discussing).
How confident are you that this is how all persuasion works?
I don’t know how general this is, but I do think it’s an important factor that I don’t see discussed.
Another point is peer effects. I remember at school my physics teacher used to use proof by intimidation, where he would attempt to browbeat and ridicule students into agreeing with him on some subtly incorrect argument. And he wouldn’t just get agreement because he scared people; the force of his personality and the desire not to look foolish would genuinely convince them. And then he’d get cross for real, saying no, you need to stand up for yourself, think through the maths. But if you can’t fully think through the soundness of the arguments, if you are groping around between the correct and the incorrect answer, then you will be swayed by these social effects. I think a lot of persuasion works like that, but on a more subtle and long-term level.
But I don’t think it’s just confirmation bias. People do get won over by arguments. People do change their minds, convert, etc. And often after changing their mind they become just as passionate for their new cause as they ever were for the old. But what is persuasive and what is logical sometimes seem disjoint to different people.
You are right that these things afflict some areas more than others. Politics and religion are notoriously bad. And I do think a large part of it is that people simply have very different standards for what a successful argument looks like, and that this is almost an aesthetic.
Sure, confirmation bias is a force but it’s not an insurmountable force. It only makes changing one’s beliefs difficult, but not impossible.
But what is persuasive and what is logical sometimes seem disjoint to different people.
I agree and I don’t find this surprising. People are different and that’s fine.
Take the classic “Won’t somebody please think of the children!” argument. I, for example, find it deeply suspect, to the extent that it works as an anti-argument for me. But a not inconsiderable number of people can be convinced by this (and, in general, by emotional-appeal strategies).
I guess which kinds of people are convinced by which kinds of arguments would be an interesting area to research.
But I do wonder how many neoreactionaries have been deconverted by that FAQ. I suspect the number is very low...
This is an interesting question that seems empirically testable—we could ask those people and make a poll. Although there is a difference between “believing that NRs are probably right about most things” and “self-identifying as NR”. I would guess there were many people impressed (but not yet completely convinced) by NR without accepting the label (yet?), who were less impressed after reading the FAQ. So the losses among potential NRs were probably much higher than among already fully convinced NRs.
That’s a warning sign, not a barbed-wire fence patrolled by guards with orders to shoot to kill.
why not choose any other fringe political belief instead, or try creating a new one from scratch, or whatever?
Neoreaction is an interesting line of thought offering unusual—and so valuable—insights. If you don’t want to talk about NRx, well, don’t. If you want to talk about different political beliefs, well, do.
some way of warning people “you have strayed from the path”
What is “the path”? LW is a diverse community and that’s one of its strengths.
When a rationalist sympathetic to neoreaction reads the SSC neoreaction anti-faq, they should be deeply shocked and start questioning their own sanity. They should realize how much they have failed the art of rationality by not realizing most of that on their own.
You did mention mindkill, didn’t you? I recommend a look in the mirror. In particular, you seem to be confusing rationality with a particular set of political values.
epistemically correct political opinions
Political opinions are expressions of values. Values are not epistemically correct or wrong—that’s a category error.
Talking about “neoreaction” (or any other political group) already is a package-deal fallacy. NRs have a set of beliefs. Each of those beliefs individually can be true or false (or disconnected from evidence). These beliefs should be debated individually. It is quite possible that within the set, some beliefs will be true, some will be false, and some will be undefined. Then we can accept the true beliefs, and reject the false beliefs. There is no need to use the word “neoreaction” anywhere in that process.
So, instead of having threads about neoreaction, we (assuming we are going to debate politics) should have threads about each individual belief (only one such thread at a time). Then we should provide evidence for the belief or against the belief. Then we should judge the evidence, and come to a conclusion, unconstrained by identity labels.
The fact that we are not already doing it this way is, for me, evidence on the meta level that we are not ready to have political debates.
Debating beliefs separately, understanding the conjunction fallacy, providing evidence, avoiding labels, tabooing words… this is all rationality 101 stuff. This is “the path” we have already strayed from. If we collectively fail at rationality 101, I don’t trust our ability to debate more complex things.
Political opinions are expressions of values. Values are not epistemically correct or wrong—that’s a category error.
Value is “I don’t want children to starve”. Political opinion is “we should increase the minimum wage (so that children will not starve)”. There is more here than the value; there is also a model of the world saying that “increasing the minimum wage will reduce the number of starving children (without significant conflict with other values)”. Another person may share the value but reject the model. They may instead have a model that “increasing the minimum wage increases unemployment, and thus increases the number of starving children”, and therefore hold the political opinion “we should remove the minimum wage (so that children will not starve)”. Same value, different models, different political opinions.
It seems to me that people usually differ more in their models than in their values. There are probably few people who really want to optimize the world to increase the number of starving children, but there are many people with political opinions contradicting each other. (Believing too quickly that our political opponents have different values is also covered in the Sequences.)
Each of those beliefs individually can be true or false (or disconnected from evidence). These beliefs should be debated individually.
I don’t think it’s quite that simple.
You are arguing for atomicity of beliefs as well as their independence—you are saying they can (and should) stand and fall on their own. I think the situation is more complicated—the beliefs form a network and accepting or rejecting a particular node sends ripples through the whole network.
Beliefs can support and reinforce each other, they can depend on one another. Some foundational beliefs are so important to the whole network that rejecting them collapses the whole thing. Consider e.g. Christianity—a particular network of beliefs. Some can stand or fall on their own—the proliferation of varieties of Christianity attests to that—but some beliefs support large sub-networks and if you tear them down, the rest falls, too. At the root, if you reject the belief in God, debating, for example, the existence of purgatory is silly.
The package-deal fallacy exists and is real, but excessive reductionism is a fallacy, too, and just as real.
If we collectively fail at rationality 101, I don’t trust our ability to debate more complex things.
Oh, I don’t trust our ability to debate complex things. But debate them we must, because the alternative is much worse. That ability is not a binary flag, by the way.
There is more than the value; there is also the model of the world
True, and these should be separated to the extent possible.
It seems to me that people usually differ more in their models than in their values.
I don’t know about that—I’d like to see more evidence. One of the problems is that people may seem to have the same values at the level of costless declarations (everyone is for motherhood and apple pie), but once the same people are forced to make costly trade-offs between things important to them, the real values come out and I am not sure that they would be as similar as they looked before.
[P]eople may seem to have the same values at the level of costless declarations (everyone is for motherhood and apple pie), but once the same people are forced to make costly trade-offs between things important to them, the real values come out and I am not sure that they would be as similar as they looked before.
Maybe. It seems to me that there could be two systems of political ideas—call them A and B—both of which are pretty credible when taken as wholes, but for which if you take any single proposition from one and examine it in the context of the other, it’s obviously wrong.
(The same thing happens with scientific theories. Key words: “Quine-Duhem thesis”.)
On the other hand, it does also happen that basically-unrelated ideas get bundled together as part of a package deal, and in that case we probably do generally want to try to separate them. So I’m not sure what the best way is to make the tradeoff between splitting and lumping.
(cont.)
I guess it is likely true that more people find out about neoreaction on LessWrong than vice versa. However, it is not obvious to me that hardly anyone would join LessWrong to discuss LW topics after being exposed to neoreaction first. I mean, MoreRight recommends that its readers read LessWrong and SlateStarCodex as well. Xenosystems has LW, SSC and OvercomingBias on its blogroll. Radish Magazine’s list of the people they admire includes Eliezer Yudkowsky. Obviously some of those people were LWers themselves, and some might post links to LessWrong because they are trying to make their place more comfortable for LWers who wander there. But still, I hope that at least some neoreactionaries would come here with an interest in LW topics (cognitive biases, the future of humanity, artificial intelligence). It is probably true that neoreaction gains more members from LW than vice versa. This is the community layer. But there is also the intellectual-toolbox layer. And I think that if LessWrong started discussing politics, having a few neoreactionaries here (not just any neoreactionaries, but those who think in LW terms and are able to notice cognitive biases) would probably be beneficial. And I guess that if we are to overcome biases, we will have to deal with politics. You see, I fear that by paying attention only to the cognitive biases that are easy to test in the lab, we are like the proverbial drunk man searching for his keys under a lamp-post.
For example, the field of cognitive biases researches what happens inside a person’s head, but certain things from political science and economics, such as the median voter theorem, Duverger’s law, and the motte-and-bailey effect (when, instead of happening inside one person’s head, it happens to a movement—when different people from the same movement occupy the motte and the bailey; I think individual and group motte-and-baileys are quite distinct), seem analogous enough to be thought of as yet another kind of bias, one that prevents optimal decision-making at the group level. And if we were to start discussing things like these, it would be hard to avoid using political examples altogether. By the way, the idea that the list of biases LessWrong usually talks about is not exhaustive enough has already been discussed here.
So even if we try having rational debates about politics, I would prefer to try them on some other political topics.
Yes, definitely. I think that the political topics (especially at the beginning) would have to be much more specific and less related to the questions of identity.
the motte-and-bailey effect (when, instead of happening inside one person’s head, it happens to a movement—when different people from the same movement occupy the motte and the bailey; I think individual and group motte-and-baileys are quite distinct)
This could just as easily be described, with the opposite connotation, as the movement containing some weakmans*, which makes me think that we need a better way of talking about this phenomenon. ‘Palatability spread’ or ‘presentability spread’? But that isn’t quite right. A hybrid term like ‘mottemans’ and ‘baileymans’ would be the worst thing ever. Perhaps we need a new metaphor, such as the movement being a large object where some parts are closer to you, and some parts are further away, and they all have some unifying qualities, and it is usually more productive to argue against the part that is closer to you rather than the part that is far away, even though focusing on the part that is far away makes it easy to other the whole edifice (weakmanning); and motte-and-baileying is subscribing to the further-away part of your own movement but pretending that you are part of the closer part.
*in the technical sense; their positions may be plenty strong but they are less palatable
Edit: Whoops no one will see this because it’s in an old open thread. Oh well.
What I had in mind was a situation where “a person from outside” talks to a person who “occupies a bailey of the movement” (for the sake of simplicity let’s call it “a movement”, although it doesn’t have to be a movement in the traditional sense). If the former notices that the latter’s position is weakly supported, then the latter appeals not to the motte position itself, but to the existence of high-status people who occupy the motte position (e.g. “our movement has a lot of academic researchers on our side” or something along those lines), even though their own position doesn’t necessarily resemble that of the “motte people” beyond a few aspects; therefore, supposedly, “a person from outside” should not criticize their movement. In other words, criticism of a particular position is interpreted as criticism of the whole movement and of the “motte people”, so they invoke “a strongman” to deflect the criticism from themselves.
I think you made a very good point. From the inside, if an outsider criticizes a certain position of the movement, it looks as if they attacked a weakman of the movement, and since it feels like they attacked the movement itself, an insider feels they should present a stronger case for it, because allowing an outsider to debate weakmen without having to debate the stronger positions could give that outsider and other observers the impression that these weakmen were what the movement was all about. From the outsider’s perspective, however, it looks like they criticized a particular position of the movement, and then (due to solidarity or something similar) the movement’s strongmen were fielded against them; from that perspective, the movement pulled a move that looks very similar to a motte-and-bailey.
Whoops no one will see this because it’s in an old open thread. Oh well.
I think that replying to old comments should be encouraged. Because otherwise if everyone feels that they should reply as quickly as possible (or otherwise not reply at all), they will not think their positions through and post them in a hurry.
One aspect of neoreactionary thought is that it relies on historical narratives instead of focusing on specific claims that could be true or false in a way that can be determined by evidence.
Classifying traditions by their cladistic ancestry is a fine example. The statement that Universalism exists, that it is a descendant of Christianity, and that it is not a descendant of Confucianism, can only be interpreted intuitively. It is not a logical proposition in any sense. It has no objective truth-value. It is a pattern that strikes me as, given certain facts, self-evident. In order to convince you of this proposition, I repeat these facts and arrange them in the pattern I see in my head. Either you see the same pattern, or another pattern, or no pattern at all.
Given such an idea of how reasoning works, it’s not clear that there is an easy solution that allows for agreement on a social norm for discussing politics.
It isn’t clear to me that this sort of thought should be called “reasoning”, a term which is commonly used for dealing with propositions that do have truth-values, at all.
It seems to me to be more in the vein of “poetry” or “poetry appreciation”.
It seems to me to be more in the vein of “poetry” or “poetry appreciation”.
I don’t think that’s entirely fair to Moldbug. Illustrating patterns and using the human ability for pattern matching does have its place in knowledge generation. It’s more than just poetry appreciation.
After reading the quote I thought that he was trying to make an analogy between finding a historical narrative in historical facts and drawing a curve that best fits a given series of data points. Indeed, saying that such a curve is “true” or “false” does not make a lot of sense, since just because a point lies outside the graph of a function does not mean that this function cannot be the curve of best fit; one cannot decide that from a small number of data points, one needs to measure the (in)accuracy of the model over the whole domain. Such an analogy would lead to interesting follow-up questions, e.g. how exactly does one measure the inaccuracy of a historical narrative?
However, after reading Moldbug’s post I see that he does not try to make such an analogy; instead he appeals to intuitive thinking. I think this is not a good argument, since intuition is the ability to acquire knowledge without inference or the use of reason, so saying that you used your intuition to arrive at a certain conclusion is basically saying that you used “something else” (similar to how you cannot build stuff out of nonwood); this category does not seem specific enough to be a good explanation. Humans are able to find a lot of patterns, some of which are not meaningful. How to recognize which patterns are meaningful and which aren’t is an interesting problem. But this applies to the whole field of history, not just Moldbug’s ideas.
One aspect of neoreactionary thought is that it relies on historical narratives instead of focusing on specific claims that could be true or false in a way that can be determined by evidence.
I don’t see how it does this any more than any other political philosophy.
It’s not true of someone who does get his beliefs by thinking about issues individually. Whether you would say such a person has a political philosophy is another matter.
I was reading the thread about Neoreaction and remembered this old LW post from five years ago:
When I saw the question posted in the discussion, I thought it had the potential to be a good discussion topic. After all, I reasoned, there are many thoughtful people on LessWrong who are interested in politics, history, and political philosophy. There are a lot of insights to be gained from discussing interesting and difficult questions about society. And there are quite a few insightful neoreactionaries here, e.g. Konkvistador and others (some of whom sadly no longer actively participate on LW). And some neoreactionary ideas are interesting and worth a look.
Despite all this, it seems that many subthreads of that thread have basically turned into an unproductive flamewar. Why? Well, politics is the mind-killer, of course. What else should I have expected? Nevertheless, I think it could have been avoided. I am not totally against political discussion in some threads. In fact, many comments in that thread are good. Taken individually, many comments are quite reasonable: for example, some of them explain a certain position, and while you may agree or disagree with the stated position, you can’t say anything bad about the comment itself. In aggregate, however, they do not form a productive atmosphere in the thread. While many comments are reasonable, it is suspiciously easy to sort most of them into two groups, pro- and anti-neoreaction, with few people in the middle who could act as (sort of) judges helping to evaluate the claims of both sides. There is suspiciously little belief updating (even on small issues) going on (maybe it is different among lurkers), which is probably a very important measure of whether a discussion was actually productive. (I do not claim that all LessWrong discussions are productive. A lot of them aren’t.) Many people aren’t arguing in good faith. Some of them even post links to this discussion in other forums as if it were representative of LessWrong as a whole.
I am not calling for censorship or the deletion of certain comments. Nor do I want discussion of controversial issues to be prohibited. I am calling for a moment of reflection about mind-killing: a moment of considering whether you yourself aren’t mind-killed, whether you yourself are no longer updating your beliefs honestly, whether you are no longer arguing in good faith (even if the other side lowered its standards first). I don’t know what ritual could be devised to reinforce this point. Maybe it is true that there are only three stable equilibria (Vladimir_M, alas, no longer comments on LessWrong). Is it possible to devise something, I don’t know, maybe a ritual or a social norm or something else, that would help keep things at the unstable point of having both a broad scope of questions and high-quality discussion? Or is any such attempt doomed to crumble under the influence of the discussion standards of the outside world?
In addition to that, I think that the question itself was poorly worded. It was way too broad. Questions that are likely to be polarizing would benefit from being much more narrow and much more specific. Maybe then everyone would be talking about the same thing, as it would be much harder to steer the discussion toward the things you like to talk about. Maybe then everyone would have a clearer and more concrete idea of what everyone else is talking about, making it easier to reason about the whole situation and easier to weigh the evidence for one position or the other.
I agree it would be good to find a better method to debate politics. Maybe we should have a meta-rule that anyone who starts a political debate must specify rules for how the topic should be debated. (So now the burden is on the people who want to debate politics here.)
It seems to me that in political topics most updating happens between conversations. It’s not like you say something and the other person goes “oh, you totally convinced me, I am changing my mind now”. Instead, you say something, the other person looks at you very suspiciously and walks away. Later they keep thinking about it, maybe google some data, maybe talk with other people, and the next time you meet them, their position is different from the last time.
For example, I have updated, from mildly pro-NR to anti-NR. I admit they have a few good points. But this is generally my experience with political movements: they are often very good at pointing out the obvious flaws of their competitors; the problem is that their own case is usually not much better, only different. I appreciate the few insights, they made me update, and I still keep thinking about some stuff. I just didn’t come to the same conclusion; I separated the stuff that makes sense to me from the stuff that doesn’t. Just like I try to draw good ideas e.g. from religion, without becoming religious. Instead of buying the whole package, I take a few bricks and add them to my model of the world. There are a few bricks in my model now that an outside observer could call “neoreactionary”, although that would probably depend on the exact words I would use to describe them (because they are not unique to NR). The other bricks I have judged separately, and I was unimpressed. That’s where I am now.
There is also the irritating fact that NRs keep associating themselves with LW. I consider that hugely dishonest and, in a way, an attack on this community. If people are impressed by LW, this can make them more open towards NR. If people are disgusted by NR, this can make them dislike LW by association. They gain, we lose. It never goes the other way round; no one is going to debate overcoming their cognitive biases just because they fell in love with NR. To put it bluntly, we are being used as a recruitment tool for some guy’s cult, and all his shit falls on our heads. Why should we tolerate that? (This, especially #1, should be required reading for every nerd.) That alone makes me completely unwilling to debate with them, because such debates are then used as further evidence that “LW supports NR”. (As an analogy, imagine how much you would want to have a polite debate with a politician you dislike, if you knew that the reason he debates with you is so he can take a photo of you two having a conversation, put it on his personal webpage, and claim that you are one of his supporters, to impress people who know you.) I refuse to ignore this context, because I am strongly convinced that NRs are fully aware of what they are doing here.
So even if we try having rational debates about politics, I would prefer to try them on some other political topics.
I think this is a great suggestion, since it allows different standards for different types of political discussion, as well as giving us a chance to actually observe which set of rules leads to most productive discussion.
Well, I think this is probably true in my experience. On the other hand, since this is an internet forum, no one is forcing anyone to post their answers immediately. Maybe for most people it takes months to change their position on a significant political belief even if they have a lot of evidence contradicting that belief, so we should not expect a given person to change their beliefs right after a conversation. However, thinking at the margin, there must have been people who were on the fence. There must have been people who quickly jump from one set of beliefs to another whenever someone posts an interesting essay. Maybe for them a week would have been enough to update? And since this was not a real-time conversation, they could have posted about their update after a week; was it their pride that prevented them from doing so? However, fewer people seemed to be on the fence than I expected; “the distribution of opinions about neoreaction” seemed bimodal. Now that I write this, though, I realize that such people would have been less motivated to write down their beliefs in the first place, and thus were underrepresented in the total volume of posts in that thread. So it is possible that the impression of bimodality is partially an artefact of that.
It is good to hear that you found something in that thread worth updating on. I also agree that neoreaction is better at finding flaws in other movements (for example, I think that some trends they describe as dangerous are actually dangerous) and at providing intellectual tools for thinking about the world that can be added to one’s toolbox, than at providing an alternative course of action, an alternative model of what kind of society is good, or an alternative movement worth following. (I am not a neoreactionary; whether those tools accurately describe the world is a different question, but to me they seem at least worth thinking about: can they shed light on things that other intellectual tools neglect?) It seems to me that neoreaction is more of, well, a reaction to progressivism (in the neoreactionary sense of the word) than a coherent set of goals in itself; the groups that compose neoreaction seem as different from each other as either of them is from progressivism. So, basically, my position towards neoreaction is somewhat similar to yours.
This is where my intuition differs from yours. Maybe this is because I have never been to a LW meetup, nor have I ever met another person who reads LW in real life. In addition, I have never met a single neoreactionary in real life. Or maybe I simply don’t know about them; I don’t think I have ever met a single SJW in real life either. I understand that LessWrong consists of real people, but when I think about LessWrong, the mental image that comes to my mind is that of a place, an abstract entity, not a community of people. Although I obviously understand that without all these people this place would not exist, the mental image of LessWrong as “a place (maybe cloudlike, maybe vaguely defined) where LW-style discussion about LW topics happens (style and topics are the most important part of what defines LW to me)” feels more real to me than the mental image of a community of people. I do not know much about LW posters beyond what they post here or on other blogs. For example, when I first started reading LessWrong, for quite a long time I thought that Yvain and Gwern were women. Why did I think this? I don’t remember. What I’m trying to say is that the difference between our intuitions may come from the difference in how we think about these two layers (place, style of discussion, and topics vs. a community of real people). It may be a bias on my part that I neglect the community-building aspect of LessWrong (i.e. I don’t know what kind of thinking leads to an optimal outcome, nor exactly what such an optimal outcome would look like); I am not sure. I haven’t disentangled my thoughts about these things yet; they are very messy. This post is partially an attempt to write down my intuitions about this (as you can see, it is not a very coherent argument); maybe it will help me clarify some things.
In addition to that, while an individual identity is relatively well defined (“I am me”), the identity of someone who belongs (or does not belong) to a certain group is much less clearly defined, and whether someone actively feels they belong to a certain group seems to depend on the situation.
What I am trying to say is that when I see neoreactionaries commenting on LessWrong, I do not perceive them as “them” if they talk, in a manner close enough to the LessWrong style, about LW topics. In this situation I do not perceive LWers and LW neoreactionaries as distinct groups in a way that would make a statement about an attack on the community meaningful. In fact, in this situation only a small part of my attention is dedicated to identity-related thoughts. The situation is different when, e.g., I read someone’s comments (usually outside of LessWrong) attacking LessWrong. In that case the part of my attention dedicated to identity-related things is much larger. In such situations I do think of myself as someone who regularly reads LessWrong and finds it a great place with a lot of interesting people who write about their insights, and when someone attacks it, my emotions create an urge to defend it. In such situations a much larger part of my attention is dedicated to this, and I do start thinking in terms of who belongs to what group. But unless it is neoreactionaries who are attacking LessWrong, I usually still do not feel (I am just describing what I feel in such situations) that LW neoreactionaries (not neoreactionaries in general) are a distinct group. Thus, in my case, it seems that it is conflicts and disagreements that create a sense of identity (even more than vice versa), since, as I have said, I have never participated in an offline LW community. (to be continued in the next comment)
I suspect this is the polarizing effect of politics, not something specific to LW nor to neoreaction. We are talking about labels, not ideas. I may agree with half of the ideas of some movement and disagree with the other half, but I usually have a clear opinion about whether I want to identify with a label or not.
My mental image of the LW community is more or less “people who have read the Sequences, and in general agree with them”. Yes, I am aware that in recent years many people ignore this stuff, to the degree that mentioning the Sequences is a minor faux pas. (And for a while it was a major faux pas, and some people loudly insisted that telling someone to read the Sequences is lesswrongese for “fuck you”. Not sure how much of that attitude actually came from the “Rational”Wiki.) That, in my opinion, is a bad thing, and it sometimes leads to reinventing the wheel in debates. To put it shortly, it seems to me we have lost the ability to build new things and have become an online debate club. Still a high-quality online debate club. Just not what I hoped for at the beginning.
LessWrong was built upon some ideas, and one of them was that “politics is the mindkiller” and that we strive to become more rational, instead of being merely clever arguers. At this moment, neoreactionaries are the group most visibly violating this rule. They strongly contribute to the destruction of the walled garden. Debating them over and over again is privileging a hypothesis; why not choose any other fringe political belief instead, or try creating a new one from scratch, or whatever?
Politics is an advanced topic for a rationalist. Before going there, one should make sure they are able to handle the easier situations first. Also, there should be some kind of feedback, some way of warning people “you have strayed from the path”. Otherwise we will only have clever arguers competing using their verbal skills. When a rationalist sympathetic to neoreaction reads the SSC neoreaction anti-faq, they should be deeply shocked and start questioning their own sanity. They should realize how much they have failed the art of rationality by not realizing most of that on their own. They should update about their own ability to form epistemically correct political opinions. Instead of inventing clever rationalizations for the already written bottom line.
In my opinion, Yvain is the most qualified person for the task of debating politics rationally, and the only obvious improvement would be to somehow find dozen different Yvains coming from different cultural backgrounds, and let them debate with each other. But one doesn’t get there by writing their bottom line first.
Did LW as a group ever have this ability? Going by the archives it seems that there were a small number (fewer than 10) of posters on LW who could do this. Now that they’re no longer posting regularly, new things are no longer produced here.
A reasonable case could be made that this is how NRx came to be.
If this is where NRx came from, then I am strongly reminded of the story of the dog that evolved into a bacterium. An alternative LW-like community that evolved into an aggressive political movement? Either everyone involved was an advanced hyper-genius or something went terribly wrong somewhere along the way. That’s not to say that something valuable did not result, but “mission drift” would be a very mild phrase.
As far as I can see it evolved into mostly smart people writing dense texts about political philosophy. That’s a bit different :-)
That would describe quite a few political movements, actually—it’s hardly exclusive to NRx.
Nope, political movements and political philosophy belong to different categories.
Some political movements evolve out of political philosophy texts, but not all political philosophy texts evolve into political movements.
I think that at this point it would be fair to say that a movement has developed out of NRx political philosophy.
Show me that movement in actual politics. Is any NRx-er running for office? Do they have an influential PAC? A think tank in Washington, some lobbyists, maybe?
Nah, man. Once you get to that level of politics, you’re already pozzed.
Oh, I think we’re using the phrase “political movement” in different senses. I meant something more like “group of people who define themselves as a group in terms of a relatively stable platform of shared political beliefs, which are sufficiently different from the political beliefs of any other group or movement”. Other examples might be libertarianism, anarcho-primitivism, internet social justice, etc.
I guess this is a non-standard usage, so I’m open to recommendations for a better term.
Yep, looks like we are using different terminology. The distinction between political philosophy and political movement that I drew is precisely the difference between staying in the ideas/information/talking/discussing realm and moving out into the realm of real-world power and power relationships. What matches your definition I’d probably call a line of political thought.
Mencius Moldbug is a political philosopher. Tea Party is a political movement.
Sentiments like this are, in my opinion, a large part of why “politics is the mind-killer.” I am no neoreactionary, but I thought the SSC neoreaction anti-faq was extremely weak. You obviously thought it was extremely strong. We have parsed the same arguments and the same data, yet come out with diametrically opposed conclusions. That’s not how it’s supposed to work. And this is far from a unique occurrence. I frequently find the same article or post being held up as brilliant by people on one side of the political spectrum, and dishonest or idiotic by people on the other side.
It is not merely that people don’t agree on what’s correct, we don’t even agree on what a successful argument looks like.
Well, sometimes that’s exactly how it’s supposed to work.
For example, if you have high confidence in additional information which contradicts the premises of the document in whole or in part, and VB is not confident in that information, then we’d expect you to judge the document less compelling than VB does. And if you wished to make a compelling argument that you were justified in that judgment, you could lay out the relevant information.
Or if you’ve performed a more insightful analysis of the document than VB has, such that you’ve identified rhetorical sleight-of-hand in the document that tricks VB into accepting certain lines of reasoning as sound when they actually aren’t, or as supporting certain conclusions when they actually don’t, or something of that nature, here again we’d expect you to judge the document less compelling than VB does, and you could lay out the fallacious reasoning step-by-step if you wished to make a compelling argument that you were justified in that judgment.
Do you believe either of those is the case?
I don’t want to focus on the anti-neoreactionary FAQ, because I don’t want this to get dragged into a debate about neoreaction. In particular, I simply don’t know how Viliam_Bur parsed the document, or what additional information one of us is privy to that the other is not. My point is that this is a general issue in politics, where one group of people finds a piece compelling and another group finds it terrible.
And note too that this isn’t experienced as something emotional or personal, but rather as a general argument for the truth. In this case, VB thinks neo-reactionaries should be “deeply shocked and start questioning their own sanity.” In other words, he thinks this is basically a settled argument, and implies that people who persist in their neoreaction are basically irrational, crazy or something along those lines. Again, this is a general issue in politics. People generally believe (or at least, talk like they believe) that people who disagree with them politically are clinging to refuted beliefs in the face of overwhelming evidence. I don’t just think this is due to epistemic closure, although that is part of it. I think it’s partly an emotional and cultural thing, where we are moved for pre-rational reasons but our minds represent this to us as truth.
I am certainly not saying I am immune from this, but I don’t have a third-party view of myself. I am not saying I am right and Viliam_Bur is wrong on the case in point. But I do wonder how many neoreactionaries have been deconverted by that FAQ. I suspect the number is very low...
To the extent that you’re making a general point—which, if I’ve understood you correctly, is that human intuitions of truth are significantly influenced by emotional and cultural factors, including political (and more broadly tribal) affiliations—I agree with your general point.
And if I’ve understood you correctly, despite the fact that most of your specific claims in this thread are about a specific ideology and a specific document, you don’t actually want to discuss those things. So I won’t.
I’m happy to discuss specifics, just not the neoreactionary FAQ. I agree with VB that LW has an unhealthy tendency for every discussion to become about neoreaction, and I don’t like it.
Instead, how about this article? Jim Edwards is a bright guy, and he clearly intended to persuade with that post. And indeed he has plenty of commenters who think he was making a valuable point. Yet I am at a loss to say what it is. Here he is, claiming to have a graph showing that government spending affects economic growth, yet all that graph shows is changes in government spending. It doesn’t show a correlation, it doesn’t suggest causation, it doesn’t do anything of the sort. Yet some people find this persuasive.
When someone says they like dance music (for example), I feel like I’m missing out; they get joy out of something I hate, which in some ways makes them better than me, but fundamentally de gustibus non est disputandum. The older I get, the more I feel like that’s how all persuasion works.
Yup, those charts puzzle me, too (based on about five seconds of analysis, admittedly, but I have a strong preexisting belief that there are many examples of such silliness on the Internet, so I’m strongly inclined to agree that this particular chart is yet another example… which is of course yet another example of the kind of judgment-based-on-non-analytical factors we’re discussing).
How confident are you that this is how all persuasion works?
I don’t know how general this is, but I do think it’s an important factor that I don’t see discussed.
Another point is peer effects. I remember at school my physics teacher used to use proof by intimidation, where he would attempt to browbeat and ridicule students into agreeing with him on some subtly incorrect argument. And he wouldn’t just get agreement because he scared people; the force of his personality and the desire not to look foolish would genuinely convince them. And then he’d get cross for real, saying no, you need to stand up for yourself, think through the maths. But if you can’t fully think through the soundness of the arguments, if you are groping around between the correct and the incorrect answer, then you will be swayed by these social effects. I think a lot of persuasion works like that, but on a more subtle and long-term level.
Yes, I agree.
That’s kind of a general issue in humans and usually goes by the name of confirmation bias.
For example, debates about religion or, say, global warming work in exactly the same way.
But I don’t think it’s just confirmation bias. People do get won over by arguments. People do change their minds, convert, etc. And often after changing their mind they become just as passionate for their new cause as they ever were for the old. But what is persuasive and what is logical sometimes seem disjoint to different people.
You are right that these things afflict some areas more than others. Politics and religion are notoriously bad. And I do think a large part of it is that people simply have very different standards for what a successful argument looks like, and that this is almost an aesthetic.
Sure, confirmation bias is a force but it’s not an insurmountable force. It only makes changing one’s beliefs difficult, but not impossible.
I agree and I don’t find this surprising. People are different and that’s fine.
Take the classic “Won’t somebody please think of the children!” argument. I, for example, find it deeply suspect, to the extent that it works as an anti-argument for me. But a not inconsiderable number of people can be convinced by this (and, in general, by emotional-appeal strategies).
I guess which kinds of people are convinced by which kinds of arguments would be an interesting area to research.
This is an interesting question that seems empirically testable—we could ask those people and make a poll. Although there is a difference between “believing that NRs are probably right about most things” and “self-identifying as NR”. I would guess there were many people impressed (but not yet completely convinced) by NR without accepting the label (yet?), who were less impressed after reading the FAQ. So the losses among potential NRs were probably much higher than among already fully convinced NRs.
That’s a warning sign, not a barbed-wire fence patrolled by guards with orders to shoot to kill.
Neoreaction is an interesting line of thought offering unusual—and so valuable—insights. If you don’t want to talk about NRx, well, don’t. If you want to talk about different political beliefs, well, do.
What is “the path”? LW is a diverse community and that’s one of its strengths.
You did mention mindkill, didn’t you? I recommend a look in the mirror. In particular, you seem to be confusing rationality with a particular set of political values.
Political opinions are expressions of values. Values are not epistemically correct or wrong—that’s a category error.
Talking about “neoreaction” (or any other political group) already is a package-deal fallacy. NRs have a set of beliefs. Each of those beliefs individually can be true or false (or disconnected from evidence). These beliefs should be debated individually. It is quite possible that within the set, some beliefs will be true, some will be false, and some will be undefined. Then we can accept the true beliefs, and reject the false beliefs. There is no need to use the word “neoreaction” anywhere in that process.
So, instead of having threads about neoreaction, we (assuming we are going to debate politics) should have threads about each individual belief (only one such thread at a time). Then we should provide evidence for the belief or against the belief. Then we should judge the evidence, and come to a conclusion, unconstrained by identity labels.
The fact that we are not already doing it this way is, for me, evidence on the meta level that we are not ready to have political debates.
Debating beliefs separately, understanding the conjunction fallacy, providing evidence, avoiding labels, tabooing words… this is all rationality 101 stuff. This is “the path” we have already strayed from. If we collectively fail at rationality 101, I don’t trust our ability to debate more complex things.
A value is “I don’t want children to starve”. A political opinion is “we should increase the minimum wage (so that children will not starve)”. There is more here than the value; there is also a model of the world saying that “increasing the minimum wage will reduce the number of starving children (without significant conflict with other values)”. Another person may share the value but reject the model. They may instead have a model that “increasing the minimum wage increases unemployment, and thus increases the number of starving children”, and therefore hold the political opinion “we should remove the minimum wage (so that children will not starve)”. Same value, different models, different political opinions.
It seems to me that people usually differ more in their models than in their values. There are probably few people who really want to optimize the world to increase the number of starving children, but there are many people with political opinions contradicting each other. (Believing too quickly that our political opponents have different values is also covered in the Sequences.)
I don’t think it’s quite that simple.
You are arguing for atomicity of beliefs as well as their independence—you are saying they can (and should) stand and fall on their own. I think the situation is more complicated—the beliefs form a network and accepting or rejecting a particular node sends ripples through the whole network.
Beliefs can support and reinforce each other, they can depend on one another. Some foundational beliefs are so important to the whole network that rejecting them collapses the whole thing. Consider e.g. Christianity—a particular network of beliefs. Some can stand or fall on their own—the proliferation of varieties of Christianity attests to that—but some beliefs support large sub-networks and if you tear them down, the rest falls, too. At the root, if you reject the belief in God, debating, for example, the existence of purgatory is silly.
The package-deal fallacy exists and is real, but excessive reductionism is a fallacy, too, and just as real.
Oh, I don’t trust our ability to debate complex things. But debate them we must, because the alternative is much worse. That ability is not a binary flag, by the way.
True, and these should be separated to the extent possible.
I don’t know about that—I’d like to see more evidence. One of the problems is that people may seem to have the same values at the level of costless declarations (everyone is for motherhood and apple pie), but once the same people are forced to make costly trade-offs between things important to them, the real values come out and I am not sure that they would be as similar as they looked before.
I wish I could give this more than one upvote.
Maybe. It seems to me that there could be two systems of political ideas—call them A and B—both of which are pretty credible when taken as wholes, but for which if you take any single proposition from one and examine it in the context of the other, it’s obviously wrong.
(The same thing happens with scientific theories. Key words: “Quine-Duhem thesis”.)
On the other hand, it does also happen that basically-unrelated ideas get bundled together as part of a package deal, and in that case we probably do generally want to try to separate them. So I’m not sure what the best way is to make the tradeoff between splitting and lumping.
(cont.) I guess it is likely true that more people find out about neoreaction on LessWrong than vice versa. However, it is not obvious to me that hardly anyone joins LessWrong to discuss LW topics after being exposed to neoreaction first. I mean, MoreRight recommends its readers to read LessWrong and SlateStarCodex as well. Xenosystems has LW, SSC and OvercomingBias on its blogroll. Radish Magazine’s list of people they admire includes Eliezer Yudkowsky. Obviously some of those people were LWers themselves, and some might post links to LessWrong because they try to make their place more comfortable for LWers who wander there. But still, I hope that at least some neoreactionaries came here with an interest in LW topics (cognitive biases, the future of humanity, artificial intelligence). So it is probably true that neoreaction gains more members from LW than vice versa. That is the community layer. But there is also the intellectual-toolbox layer, and I think that if LessWrong started discussing politics, having a few neoreactionaries here (not just any neoreactionaries, but those who think in LW terms and are able to notice cognitive biases) would probably be beneficial. And I guess that if we are to overcome biases, we will have to deal with politics. You see, I fear that by paying attention only to the cognitive biases that are easy to test in the lab, we are like the proverbial drunk man searching for his keys under a lamp-post.
For example, the field of cognitive biases researches what happens inside a person’s head, but certain things from political science and economics, such as the median voter theorem, Duverger’s law and the motte-and-bailey effect (when, instead of happening inside one person’s head, it happens to a movement: different people from the same movement occupy the motte and the bailey; I think individual and group motte-and-baileys are quite distinct), seem analogous enough to be thought of as yet another kind of bias, one that prevents optimal decision making at the group level. And if we were to start discussing things like these, it would be hard to avoid political examples altogether. By the way, the idea that the list of biases LessWrong usually talks about is not exhaustive has already been discussed here.
Yes, definitely. I think that the political topics (especially at the beginning) would have to be much more specific and less related to the questions of identity.
This could just as easily be described, with the opposite connotation, as the movement containing some weakmans*, which makes me think that we need a better way of talking about this phenomenon. ‘Palatability spread’ or ‘presentability spread’? But that isn’t quite right. A hybrid term like ‘mottemans’ and ‘baileymans’ would be the worst thing ever. Perhaps we need a new metaphor, such as the movement being a large object where some parts are closer to you and some parts are further away, and they all have some unifying qualities, and it is usually more productive to argue against the part that is closer to you rather than the part that is far away, even though focusing on the part that is far away makes it easy to other the whole edifice (weakmanning); and motte-and-baileying is subscribing to the further-away part of your own movement while pretending that you are part of the closer part.
*in the technical sense; their positions may be plenty strong but they are less palatable
Edit: Whoops no one will see this because it’s in an old open thread. Oh well.
What I had in mind was a situation where “a person from outside” talks to a person who “occupies a bailey of the movement” (for the sake of simplicity let’s call it “a movement”, although it doesn’t have to be a movement in the traditional sense). If the former notices that the latter’s position is weakly supported, the latter appeals not to the motte position itself but to the existence of high-status people who occupy the motte position, e.g. “our movement has a lot of academic researchers on our side” or something along those lines, even though that person’s own position doesn’t necessarily resemble that of the “motte people” beyond a few aspects; therefore, supposedly, “a person from outside” should not criticize their movement. In other words, criticism of a particular position is interpreted as criticism of the whole movement and of the “motte people”, so they invoke “a strongman” to deflect the criticism from themselves.
I think you made a very good point. From the inside, if an outsider criticizes a certain position of the movement, it looks as if they attacked a weakman of the movement, and since it feels like they attacked the movement itself, an insider feels obliged to present a stronger case for it, because allowing an outsider to debate weakmen without having to debate the stronger positions could give that outsider and other observers the impression that these weakmen were what the movement was all about. However, from the outsider’s perspective, they criticized a particular position of the movement, and then (out of solidarity or something similar) the movement’s strongmen were fielded against them; so from the outsider’s perspective, the movement pulled a move that looks very much like a motte-and-bailey.
I think that replying to old comments should be encouraged. Because otherwise if everyone feels that they should reply as quickly as possible (or otherwise not reply at all), they will not think their positions through and post them in a hurry.
Um, this is a horrible idea. The problem is people will make rules that amount to “you’re only allowed to debate this topic if you agree with me”.
One aspect of neoreactionary thought is that it relies on historical narratives instead of focusing on specific claims that could be true or false in a way that can be determined by evidence.
To quote Moldbug:
Given such an idea of how reasoning works, it’s not clear that there is an easy solution that allows for agreeing on a social norm for discussing politics.
It isn’t clear to me that this sort of thought should be called “reasoning”, a term which is commonly used for dealing with propositions that do have truth-values, at all.
It seems to me to be more in the vein of “poetry” or “poetry appreciation”.
I don’t think that’s entirely fair to Moldbug. Illustrating patterns and using the human ability for pattern matching does have its place in knowledge generation. It’s more than just poetry appreciation.
After reading the quote, I thought he was trying to draw an analogy between finding a historical narrative that fits the historical facts and drawing a curve that best fits a given series of data points. Indeed, saying that such a curve is “true” or “false” does not make a lot of sense: just because a point lies off the graph of a function does not mean that this function cannot be the curve of best fit. One cannot decide that from a small number of data points; one needs to measure the (in)accuracy of the model over the whole domain. Such an analogy would lead to interesting follow-up questions, e.g. how exactly does one measure the inaccuracy of a historical narrative?
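The curve-fitting analogy can be made concrete. Judged only on the few points used to fit it, a flexible model always looks perfect; its error over the whole domain tells a different story. Here is a minimal sketch (the sine function, the sampling window, and the polynomial degrees are all my own illustrative choices, not anything from the original discussion):

```python
import numpy as np

rng = np.random.default_rng(0)

# The "whole domain" we ultimately care about.
domain = np.linspace(0, 10, 200)
truth = np.sin(domain)

# A handful of observed "historical facts": noisy samples from a narrow window.
x_obs = np.linspace(0, 3, 5)
y_obs = np.sin(x_obs) + rng.normal(0, 0.05, size=5)

for degree in (1, 4):
    model = np.poly1d(np.polyfit(x_obs, y_obs, degree))
    # Error on the observed points vs. error over the whole domain.
    fit_err = np.mean((model(x_obs) - y_obs) ** 2)
    domain_err = np.mean((model(domain) - truth) ** 2)
    print(f"degree {degree}: fit error {fit_err:.6f}, whole-domain error {domain_err:.2f}")
```

The degree-4 polynomial passes through all five points exactly (near-zero fit error), yet extrapolates wildly outside the window, so its whole-domain error dwarfs that of the cruder line. A narrative that perfectly "explains" a few chosen facts can fail the same way.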
However, after reading Moldbug’s post, I see that he does not try to make such an analogy; instead he appeals to intuitive thinking. I think this is not a good argument: intuition is the ability to acquire knowledge without inference or the use of reason, so saying that you used your intuition to arrive at a certain conclusion is basically saying that you used “something else” (similarly to how you cannot build stuff out of non-wood). This category does not seem specific enough to be a good explanation. Humans are able to find a lot of patterns, some of which are not meaningful. How to recognize which patterns are meaningful and which aren’t is an interesting problem, but it applies to the whole field of history, not just Moldbug’s ideas.
I don’t see how it does this any more than any other political philosophy.
It’s not true for someone who does get his beliefs by thinking about issues individually. Whether or not you call such a person having a political philosophy is another matter.