Ok, come to think of it, LWers have a higher IQ than the general population. That’s kind of not in question.
But this article was basically saying that rationalism is unpopular because there’s an IQ barrier. Most people with high IQs aren’t rational either, though. You can’t say “we have a rationalist worldview because we’re smart” if most smart people don’t.
On political correctness—it’s relevant in almost every discussion. I’d only cut it out entirely in cases when something serious was staked on my clear judgment and honest communication. If you’re on the crew of a sinking ship, and you think you know how to save it … then yeah, you’d better cut the crap. But almost every other conversation also has a social purpose. Signaling isn’t something you just turn off except on the rare occasions you meet a mundane.
If you’re on the crew of a sinking ship, and you think you know how to save it … then yeah, you’d better cut the crap. But almost every other conversation also has a social purpose. Signaling isn’t something you just turn off except on the rare occasions you meet a mundane.
So by default, talk on LW should include some fraction of statements whose purpose is to signal political correctness? It seems to me that PC is a great way to make yourself irrational and horribly biased. Good for social signalling, useless for the martial art of rationality. Kind of like coming to the dojo wearing high heels.
Some people here would say we are on a sinking ship (spaceship earth) and that this discussion is explicitly about how to save the ship by making the crew more sane. ;-) But that may be stretching the metaphor.
Well, put it this way: I think that saying “the reason most people don’t agree with us is that they’re just not smart enough” is a bit of a jerk thing to say. It doesn’t represent us well and it’s not nice. If you’re absolutely convinced this is true (and I think there’s not enough evidence for that), you should be much more circumspect in how you say it. Yes, I want to be pro-nice and anti-jerk.
LW does not behave like the crew of a sinking ship. As a matter of observation, it just doesn’t function that way. It’s partly a social or discussion forum.
“the reason most people don’t agree with us is that they’re just not smart enough” is a bit of a jerk thing to say. …. If you’re absolutely convinced this is true
Why do I have to be “absolutely convinced”? Can’t I give it a 90% credence and still say it? Or are we playing anti-epistemology and applying higher standards of evidence to statements that we find emotionally uncomfortable? Geez I leave for a few months and come back to find TEXTBOOK EXAMPLES of irrationality passing for LW debate! ;-)
Ok, at this point I say “oops.”

I basically posted without thinking too carefully; I knew I didn’t like something about Alexandros’ post. (Now that he’s rephrased it, it’s starting to sound more plausible.) And I’m sorry if I came on too strong or insulted him personally.
Straight-up rational epistemology, where you’re only concerned with truth, is … a little unnatural to me, I have to admit. Doing it all the time (as opposed to just on special occasions when you heroically overcome your bias) would be a very different life.
Upvoted for honesty (we need more of this kind of thing, I think).
I think you have a good point about the image of LW. We must be careful about how we present ourselves; that is something I tend to forget. There is, in fact, a need for a good image as well as for good rationality.
Re. “the reason most people don’t agree with us is that they’re just not smart enough”...totally aside from the question of whether this sort of sentiment is liable to be off-putting to a lot of people, I’ve very often wondered whether anyone who holds such a sentiment is at all worried about the consequences of an “Emperor’s New Clothes” effect.
What I mean by “Emperor’s New Clothes” effect is that, regardless of what a person’s actual views are on a given subject (or set of subjects), there’s really nothing stopping said person from just copying the favored vocabulary, language patterns, stated opinions, etc., of those they see as the cleverest/most prominent/most respectable members of a community they want to join and be accepted in.
E.g., in self-described “rationalist” communities, I’ve noted that lots of people involved (a) value intelligence (however they define it) highly, and (b) appear to enjoy being acknowledged as clever themselves. The easiest way to do this, of course, is to parrot those the community of interest clearly thinks are the Smartest of the Smart. And in some situations I suspect the “parroting” can occur involuntarily, just as a result of reading a lot of the writing of someone you like, admire, or respect intellectually, even if you may not have any real, deep understanding of what you are saying.
So my question is...does anyone even care about this possibility? Or are “communities” largely in the business of collecting members and advocates who can talk the talk, regardless of what their brains are actually doing behind the scenes?
I suspect answers vary.

For my own part: if hordes of people who aren’t really rationalists start adopting, for purely signaling reasons, the trappings of epistemic hygiene… if they start providing arguments in defense of their positions and admitting when those arguments are shown to be wrong, for example, not because of a genuine desire for the truth but merely because of a desire to be seen that way… if they start reliably articulating their biases and identifying the operation of biases in others, merely because that’s the social norm… if they start tagging their assertions with confidence indicators and using those indicators consistently without actually having a deep-rooted commitment to avoiding implicitly overstating or understating their confidence… and so on and so forth…
...well, actually, I’d pretty much call that an unadulterated win. Sign me up for that future, please.
OTOH, if hordes of people just start talking about how smart and rational they are and how that makes them better than ordinary people, well, that’s not worth much to me.
I think at this point I should clarify that the article didn’t (intend to) say “the reason most people don’t agree with us is that they’re just not smart enough” but rather that for people without the capability to contribute to cutting-edge maths, science, AI research and the like, our worldview is not that exciting, and they may therefore refuse to be convinced. Notice that it implies possible irrationality for many of the people who have joined thus far, as they commonly belong to the classes that our worldview values to a large extent. This includes myself.

Is it a nice thing to say? I personally do not feel comfortable bringing this stuff up and would prefer if things were different. Perhaps the fact that I feel this way made this thought stand out as more urgent to discuss than many others that I have not bothered to post. In any case, this is the reality I perceive, and I’ve tried to be as inoffensive as possible while at the same time phrasing a coherent point. If anyone else is capable of expressing the core of this message in a less divisive way, they’re welcome to do so.

Ok, clarified it makes more sense. I just extracted the wrong main point.
It seems to me that whether LW should include signaling statements or not, it clearly does. Changing that will require some pretty heroic efforts.
But I’ve only been here a few weeks. You’ve been around and actively engaged for a while (albeit on what seems like it was an abrupt hiatus), so I’m interested in your judgment here.
To try and get a little more concrete: suppose you had to classify all the posts and comments on LW as +/- MAR (useful/useless for the martial art of rationality) and +/- SSA (useful/useless to signal social affiliations… including but not limited to “PC” in the mainstream sense).
What ratio of +MAR:-MAR do you think you’d end up with?
What ratio of +SSA:-SSA?
What ratio of +MAR:+SSA?
Are there easily identifiable subsets of posts for which you think you’d end up with ratios importantly different than those? (For example, posts by particular authors, top-level posts vs. comments, or whatever.)
What ratios would you expect from a community that was actively trying to “save a sinking ship”?
To be more concrete about value for “Martial Art of Rationality” I think one would need a scalar measure, rather than a Yes/No. If you chose a Yes/No measure, your results would be very much dependent on where you drew the line. To answer the question you are asking in terms of ratios of +MAR:+SSA is really to talk about the distribution of posts in terms of rationality versus signalling.
I think that LW contains a few very very good posters, a cadre of very good posters, a few oddballs and a sea of people who sorta-kinda understand some stuff. And in many cases, it does contain posts and comments which shun rationality in order to thump the table in favour of some particular ideology or just general political correctness, which I have run foul of once or twice.
The problem, as I see it, is that if ideology gets to rule the roost on a whole host of important topics, then for those topics, LW becomes just another irrational, tribal internet echo chamber, where affirming the great chosen ideology becomes the most important task. However, I have been convinced that the benefit of having LW existing AT ALL outweighs this cost. To be plain, I think that the LW consensus is actually (factually) wrong about many things that real people deal with in the real world, but this is outweighed by the fact that LW is the only place in the world concerned with rationality.

Voted up for citing posts rather than posters for lack of rationality.

I think I’ve seen some table-thumping for political incorrectness as well as for political correctness.
And in many cases, it does contain posts and comments which shun rationality in order to thump the table in favour of some particular ideology or just general political correctness, which I have run foul of once or twice.
So what you are saying here is that ideology is always irrational? Since this is a community blog devoted to refining rationality, why don’t you address the particular points you believe constitute the irrational consensus? Or are you saying that some individuals here know that their ideology is irrational yet shun rationality in favor of it? That could be better phrased as: there are people here who follow selfish goals and argue based on matters of taste. But how do you know that those people are aware of it, that they do not honestly believe that their disguised ideology is actually rationality?
I would love to know which kind of posts and comments, and in particular what consensus, you are referring to. This is very important to me, so if you don’t want to make it public I would like you to send me a short private message.
I’m not sure I like the dichotomy between “the common man” and “us.”
AND
saying “the reason most people don’t agree with us is that they’re just not smart enough” is a bit of a jerk thing to say.
are examples of the kind of thing that I would regard as the problem. Other examples are even more inflammatory, but basically they all boil down to:
Person1: X is a true fact about the world
Person2: But saying X is mean to {political correctness brownie points group Y}, and besides, you can’t be absolutely sure it’s true/I won’t believe it until you provide an impossibly high degree of evidence/we should stop talking about it or people will think we are mean!
The result is typically that LW can recite lots of rationalist principles, but when it comes to applying them to a significant number of real-world problems, LW is clueless.
Now one might reasonably argue that we don’t need to be right about everything. Sure, LW is mired in PC BS about X,Y and Z, but topics A,B,C,D,E, … which are also important are not subject to PC irrationality pressure. To an extent I buy this argument. However, reality is not a disconnected series of isolated topics: if you’re wrong about X,Y and Z you might make incorrect inferences about all sorts of other things.
For some of us, being perceived as nice is one of the most important ways we can help ourselves in real life. And the best way to be consistently perceived as nice is to be constantly concerned with the niceness of the statements one makes, and seriously try to avoid giving offense. If I became blunt and plain-spoken it would hurt me in real life. It would not be worth it to me. Except in very rare situations (such as if I’m personally responsible for saving lives, and I have to be deliberately rude to do it).
In the interests of rationality, I’ll refrain in future from criticizing un-PC statements because they’re “not nice.” I don’t want to confuse anyone. But I can’t make those statements myself—that comes at a cost I won’t pay.
Do you ordinarily find that you have difficulty maintaining different registers(1) for different contexts?
If so, I sympathize and wish you luck in overcoming that difficulty. It is an enormously useful skill: as you say, being perceived as nice is valuable, and different communities perceive different kinds of behavior as nice, so you do best to learn to signal appropriately for different contexts (2).
If you don’t have difficulty with this, though, then your comment puzzles me. If you agree with FKARoko that LW norms support “un-PC” (3) posts, or in any event ought to, then what are you concerned about… what’s the cost? Conversely, if you don’t agree with him, why refrain from criticizing un-PC statements… what’s the confusion?
(1) I mean “register” in the linguistics sense.
(2) Unless, of course, you spend all your time in only one community.
(3) Caveat: I don’t really understand what “PC” means; I’m using the term because it’s the term you and FKARoko both use. I gather you use it here as synonymous with “nice,” although in my own experience niceness often has more to do with how a statement is framed than what is actually being said.
For clarity’s sake: PC means politically correct and usually refers to political inoffensiveness. The term isn’t really apt for the current discussion because there was no talk of politics.
I don’t know if I’m great at code-switching. I can tell that LW is “not PC” or blunt-spoken. But the thing is, when some heuristic is good for you in most of your life, you may internalize it and simply make it a constant feature of your personality. For example, if it’s usually a bad idea for you to use swear words, you may be better off just not swearing at all, even when you’re in the saloon and swearing would be socially appropriate. You may want to personally identify as a non-curser. It makes double-sure that you’ll never swear at the wrong time.
If you don’t trust yourself to be socially agile in switching from situation to situation, then I think “better safe than sorry” makes sense.
If you don’t trust yourself to be socially agile in switching from situation to situation, then I think “better safe than sorry” makes sense.
Agreed as far as it goes. But that’s a big “if.”
The social agility you’re talking about is an important life skill. If I spend some time in contexts where a particular behavior has social benefits and some time in contexts where the same behavior has social costs, then I get the best results by staying aware of the context that I’m in and behaving appropriately.
That said, I do appreciate that it’s harder for some people than others. If I can’t do that, the next-best thing is to construct a superposition of rulesets and always apply it. This is similar to what you’re suggesting here… if the costs of cursing in the no-curse environments are much higher than the benefits of cursing in the yes-curse environments, adopting a “don’t curse regardless of context” rule as you suggest can work OK.
My point is, it’s a second-best option. Paying attention to my environment as it changes and responding accordingly has better payoffs, if I can manage it.
being perceived as nice is one of the most important ways we can help ourselves in real life
not just for some, for all of us. It is in everyone’s narrow self-interest to sacrifice epistemology for signalling purposes. And in real life one has to do that. But here at least, I think that we should establish the opposite norm.

Look, what is the point of you trying to appear PC on LW? I for one am just not impressed. I already know that you’re from a certain demographic that implies lots of good things about you. But it implies bad things about you if you can’t turn off the signalling BS in a context where it is socially very harmful, i.e. a rationality website.

Throughout the Sequences it has been made clear that there usually is some local incentive for motivated cognition. Wanting to appear PC is no different: it’s just another reason that people have for blowing their thought process up, with all the usual downsides, e.g. the downside that you often simply don’t know what the cost will be, because you would only be able to compute the cost of the motivated cognition if you were not engaging in it. Suffice it to say that I think we should have very strong norms against motivated cognition here on LW.
As I see it, the problem there is that saying “we shouldn’t be affected by this stuff” does not mean that we aren’t affected by this stuff. Knowing your cognitive biases allows for workarounds—it doesn’t cause them not to exist.
In particular, saying to others “you’re smart people, you should not be affected by such nuances” and then not bothering to put them into place oneself is almost a cliched way to come across as an arsehole on the Internet and have people not want to bother listening to the speaker, no matter how right they may be. The message communicated is not “you should be affected less”, but “I am inept.” This reduces one’s effectiveness.
Postel’s law: “Be conservative in what you send; be liberal in what you accept.”
If someone posts like a raging arsehole, they can be as right as they like, but people still won’t welcome them or want to listen to them. It’s not as effective a communication strategy as thinking before typing: your aim is to get the effect you want, not to win the conversation.
I speak here as a (hopefully) recovering arsehole. I have no plans to compromise the accuracy of what I’m saying, but it is useful to say it in a way that doesn’t repel people from even reading.

Sorry, I don’t understand you. Who said that someone should not be affected by cognitive biases?
I know you’re not impressed. I know folks around here don’t like it much. I’m glad there are such folks who say what they think without signaling. I respect that attitude, and it’s partly because I respect it that I’m here. I do want to know what people think when they’re solely concerned with accuracy. But I don’t really want to imitate them—maybe a little, but not thoroughly.
Truth is, I used to be socially awkward. These days, I’m not, but it’s not because I’m any cleverer at dealing with people, it’s because I’ve adopted a persona that’s all about being, let’s say, harmless. Positive and gentle. Trying to please. It’s kind of a good all-purpose heuristic—if I make some kind of faux pas, people will think “oh, she’s clueless, but she’s nice.” I’m good with nice-but-clueless.
And if you really want to be 100% nice-but-clueless, you have to be that way all the time. It’s not just political PC—I make a deliberate point of, as much as possible, never thinking or speaking badly of anyone. Not even in private. Not even in forums where the opposite norm holds. Once you start down that path, there’s a chance that you might be bitchy in public. And you can’t really afford that if you have other flaws and weaknesses, I think; I need people to forgive me my mistakes.
Would it be worth it to change? As you point out, I can’t know, because I’m within the world of motivated cognition. That said, I can think of circumstances where I probably ought to change—if I worked in the private sector, for example, or if I chose an advisor who really values frankness (both live possibilities.) There may come a point where “nice but clueless” stops working for me. And then I’ll really have to take this stuff seriously. But I have no idea who I’ll be, once I’m not nice-but-clueless.
A couple of times here, I’ve run into guys who find nice really annoying. It was useful for me to be a good bit blunter than usual with them, and I’m inclined to believe that the experiment in flexibility was good for me.
The problem, I think, is that nice involves such a light touch that for some people, it fails to make contact.
I like nice. I prefer nice. And I think it’s got some very definite limits.
I make a deliberate point of, as much as possible, never thinking or speaking badly of anyone. Not even in private
Then I think that you are in grave danger of getting pretty badly screwed by someone. There are genuinely bad people in this world, and there are lots of kind-of-bad people who will screw you over and rationalize it somehow. You have to have a healthy skepticism (not paranoia) about people’s motives, it’s the only way to prevent someone taking your money or your job/house etc. Seriously, forget the darned debate: if what you say is true, you are probably in serious danger of being taken advantage of in some way.
If I were you, I would seriously consider trying to improve your social skills the hard way, i.e. by learning social skills, and not engaging in potentially massively self-harming motivated cognition.

You may be right there.
Ah… I should have read this before replying to what you said elsewhere.
So, you’re aware that presenting as “nice but clueless” works against you in communities where cluelessness isn’t a point in your favor, but you prefer to optimize for the communities where it is.
OK, fair enough: that’s your choice to make.

I’m not sure, really. I’m open to changing my mind. I may have to, after all.

I doubt you’ll ever “have to,” in the sense of being forced to by circumstances. That’s what I meant by it being your choice to make.
Plenty of people live their entire lives optimizing for minimizing social friction at the cost of expressing their thoughts clearly and unambiguously… presenting as “nice but clueless,” in other words. “Going along to get along” is another way to say it. I suspect that as long as you make the choice to do so, you will be able to find situations that allow you to, just like they do.
That’s what value judgments are for, after all: they let you construct a preference order among possible states of the world, and therefore drive the choices you make. The decision to present as “nice but clueless” will affect the sorts of acquaintances you make, the sorts of communities you join, the sorts of organizations you work for, and so forth.
To put it differently: like it or not, you actually have a lot of power over your own future.
So the question is, how confident are you in the preference order you’re defending?
If you’re confident in it, then great… you’re choosing the world you want, which is as it should be, and I wish you joy of it.
OTOH, if you are uncertain, then I suggest that you might do better to explore the roots of that uncertainty yourself, rather than wait for events to somehow force you to change your mind.
I like that attitude. It is also not irrational, because you are aware of it and deliberately choose to be that way. I believe that Less Wrong features way too much “ought.” I don’t disagree with the consensus on Cryonics at all, yet I’m not getting a contract because I’m too lazy and I like to be lazy. My usual credo is: I can’t lose as long as I don’t leave my way. That doesn’t mean I am stubborn. I allow myself to alter my way situationally.

Rationality is about winning, and what constitutes winning is purely subjective. If you don’t care whether the universe is tiled with paperclips rather than filled with apes having sex under the stars, that is completely rational, as long as you are aware of what exactly you care or don’t care about.
Do you think that your beliefs regarding what you care about could be mistaken? That you might tell yourself that you care more about being lazy than about getting cryonics done, but that in fact, under reflection, you would prefer to get the contract?
...in fact, under reflection, you would prefer to get the contract?
I can’t solve that problem right now. It implies that part of my volition is not, in fact, part of what I want, or should not be part of my goals. Why would I only listen to the part of my inner self favoring long-term decisions? I could take the car to drive to that Christmas party to visit my family and friends, or I could stay home because of black ice. After all, there will be many more Christmas parties without black ice in the future, and even more in the far future where there will be backups. But where does this thinking lead? I want both, of course. On reflection, not dying is more important than the party. But on further reflection, I do not have enough data to conclude that any long-term payoff could outweigh extensive restraint at present.

There are also some practical considerations about Cryonics. I am in Germany, and I don’t know of any Cryonics providers here. I don’t know the likelihood of being frozen quickly enough in case of an accident. If I know in advance that I’m going to die, I can still get a contract then. So is the money really worth it, given that most pathways to death result in no expected benefit from a Cryonics contract?
Look, what is the point of you trying to appear PC on LW?
Because this is not a private mailing list?
Imagine some scientist or politician came here to get a dose of rationality, only to come across a discussion where someone argues that he knows more and then tells everyone to keep their idiot mouths shut. This happened on Less Wrong, and the person who said so might even have been factually correct. Besides causing some uproar and damaging Less Wrong, it is also a bad way of communicating truth and rationality. Stating conclusions like that is not a way to refine rationality. People do not come here to learn facts, e.g. that they are dumb, but how to arrive at such factual conclusions.
IMO if a top politician or scientist came here and found politically correct BS as the standard ideology on this so-called “rational” website, they would probably sigh and close the page, never to return. Why should they? They have better things to do with their time than listen to BS.
On the other hand, I don’t think they would be impressed if we didn’t have the skill to frame potentially inflammatory facts in a delicate way. I am not arguing against careful, delicate framing. I am arguing against MOTIVATED COGNITION.
I’m not suggesting that Less Wrong should conceal the truth to schmooze certain ideologies. What I am suggesting is that Less Wrong is NOT about teaching people how to score Karma points on Less Wrong, but how to succeed in the real world.

Less Wrong has to be able to deliver rationality in doses people can actually absorb.

Less Wrong has to take care that it does not shut itself up in its own ivory tower.
Less Wrong has to be focused on teaching utilizable rationality skills.
Motivated cognition can be a double-edged sword. If you overcompensate against political correctness you can easily end up pursuing an introversive self-image that leads to ingroup bias. Less Wrong has to be in an equilibrium of internal affairs and public relations.
Political correctness bias is not the cure to ingroup bias. If you have an ingroup bias problem, you solve the ingroup bias problem with the usual rationality tactics—like being honest about the weaknesses of the ingroup.
As far as I can tell, the best path is to vigorously fight PC bias and ingroup bias. You can have both. Really.
The result is typically that LW can recite lots of rationalist principles, but when it comes to applying them to a significant number of real-world problems, LW is clueless.
They got the SIAI funded.
Person1: X is a true fact about the world
Person2: But saying X is mean
The genome of the Ebola virus is a true fact about organisms. Yet it is dumb to state it on a microbiology forum. Besides, “if you don’t agree you are dumb” is a statement that has to be backed by exceptional amounts of evidence. People who already disagree can only be convinced by evidence, if they are not intelligent enough to grasp the arguments.
“In the field of security engineering, a persistent flat-earth belief is ‘security by obscurity’: the doctrine that security measures should not be disclosed or even discussed.
In the seventeenth century, when Bishop Wilkins wrote the first book on cryptography in English in 1641, he felt the need to justify himself: “If all those useful Inventions that are liable to abuse, should therefore be concealed, there is not any Art or Science which might be lawfully profest”. In the nineteenth century, locksmiths objected to the publication of books on their craft; although villains already knew which locks were easy to pick, the locksmiths’ customers mostly didn’t. In the 1970s, the NSA tried to block academic research in cryptography; in the 1990s, big software firms tried to claim that proprietary software is more secure than its open-source competitors.
Yet we actually have some hard science on this. In the standard reliability growth model, it is a theorem that opening up a system helps attackers and defenders equally; there’s an empirical question whether the assumptions of this model apply to a given system, and if they don’t then there’s a further empirical question of whether open or closed is better.
Indeed, in systems software the evidence supports the view that open is better. Yet the security-industrial complex continues to use the obscurity argument to prevent scrutiny of the systems it sells. Governments are even worse: many of them would still prefer that risk management be a matter of doctrine rather than of science.”
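To make the quoted theorem a bit more concrete, here is a minimal sketch of the symmetry argument, assuming the standard reliability growth model in which mean time between failures grows roughly linearly with cumulative testing effort (the symbols $t$, $K$ and $\lambda$ are illustrative, not taken from the quoted text):

$$\mathrm{MTBF}(t) \approx \frac{t}{K}$$

If opening up the system multiplies the effective testing effort of attackers and defenders alike by the same factor $\lambda$, then

$$\frac{\mathrm{MTBF}_{\mathrm{attacker}}}{\mathrm{MTBF}_{\mathrm{defender}}} \approx \frac{\lambda t_a / K}{\lambda t_d / K} = \frac{t_a}{t_d},$$

so the ratio is unchanged and openness helps both sides equally. Whether the linear-growth assumption holds for a particular system is exactly the empirical question the passage goes on to mention.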
Let me clarify my last comment. It is really all about what we want. We just have to accept that Less Wrong is not only about refining rationality. Less Wrong also won’t be able to refine rationality if it allows the discussion of some topics in great detail, as they risk the future of this platform. So every statement here has to be taken with a grain of salt and understood in a larger context. Proclaiming the truth might be rational if you value rationality in and of itself. But since rationality is about winning, you have to ask what constitutes winning. The answer to this question is ultimately ideological and a matter of taste.
Again, you are being logically rude. I refuted (I think) the idea that “‘if you don’t agree you are dumb’ is a statement that has to be backed by exceptional amounts of evidence.” Don’t switch the goalposts mid-debate. Admit that, in fact, there are some statements such that if you disagree with them, you are dumb, no massive dossier of evidence required.
So what is it that you are trying to argue which I evade? I don’t think that you can generalize from the example of avoiding signaling LW’s intellectual superiority to the general issue of political correctness. Some factual statements are simply bad arguments to use in a debate.

I’m not being logically rude; I’m just trying to argue that political correctness and epistemological issues are not necessarily mutually exclusive. Further, if you want to output a plan for action, you had better tweak it for real-world use, which naturally must include some signaling. Only afterwards is one able to tackle the more fundamental issue of the general rationality of political correctness, e.g. overcoming human nature.
I refuted (I think) the idea that “‘if you don’t agree you are dumb’ is a statement that has to be backed by exceptional amounts of evidence.”
I do not think that you have refuted it. I also believed that part of your argument was to assert that we sometimes shouldn’t keep quiet about the truth, whatever the consequences. I do not agree with that either.
Telling people they are dumb means that you are sufficiently sure that (1) you are right, (2) they are wrong and not just more demanding (more evidence, different kinds of evidence, etc.), and (3) the reason they disagree is that they are intellectually inferior. Further, even if you are sure someone is dumb, it is still a really bad argument, because it is not persuasive. If someone is dumb you have to be even smarter to convince that person. If you just proclaim that someone is dumb, maybe you are not as smart as you thought either.

Some people don’t know that they are alive. Does that mean that they are dumb? Eliezer Yudkowsky might be able to rationalize such a disorder because of all his background knowledge. But would he be able to do so if he had grown up without being able to acquire his current set of skills? A lot of one’s potential intelligence is unleashed due to certain environmental circumstances, e.g. an advanced education. There are indeed people who possess less potential. Yet if we want to make them aware of their shortcomings, it is not rational to do so by telling them they are dumb, but rather by telling them to try to estimate their intelligence objectively. There are other, more effective ways to communicate the truth than proclaiming the conclusion.
2+2=4; if you don’t agree, you are dumb.
My calculator agrees that 2+2=4, so? If someone does challenge your beliefs, it does not mean that the person is dumb but that maybe you accepted something as given that might be less obvious than you think. The complete proof of 2 + 2 = 4 involves 2,452 subtheorems in a total of 25,933 steps.
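For illustration, here is a compressed version of that derivation in ordinary Peano arithmetic, using only the recursion axioms $a+0=a$ and $a+S(b)=S(a+b)$ together with the definitions $1=S(0)$, $2=S(1)$, $3=S(2)$, $4=S(3)$; the 25,933-step figure comes from a formal system (Metamath) working from far lower-level foundations, so treat this as a sketch rather than the proof being counted:

$$2+2 = 2+S(1) = S(2+1) = S(2+S(0)) = S(S(2+0)) = S(S(2)) = S(3) = 4.$$

Even this short chain quietly leans on one of those definitions or axioms at every step, which is the sense in which “obvious” arithmetic facts decompose into many subtheorems.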
Yes, but if we are talking about real-world problems then we have to deal with people who are dumb, and sometimes we also have to convince them to get what we want. It is rational to limit the truth output of a forum of truth-seekers. An analogy would be the intolerance of intolerance: to maximize tolerance you have to be intolerant of intolerance. This is also the case with rationality. You won’t be able to make the world a more rational place by telling irrational folks the truth, namely that they are irrational; that would just result in more irrational behavior.
A belief is irrational if you use irrational methods of thinking to obtain it. I consider most irrational beliefs to be the result of ignorance of or incompetence in the methods of rationality, rather than selfishness or malice. (I guess we could argue about whether anti-epistemology is an example of incompetence or of willful going-astray.)
I can’t speak for Roko, but I imagine that on Less Wrong, almost all failures of rationality are the result of incompetence.
I imagine that on Less Wrong, almost all failures of rationality are the result of incompetence.
If I’m not able to understand my failure, I still want to know if someone thinks I am incompetent. I won’t be able to understand how the person arrived at this conclusion if it is due to a lack of intelligence on my side, but I’ll be able to allow for the possibility and take it into account if I ever get stuck trying to reach a goal. So if someone honestly believes that I am too dumb, he/she should say so, and I won’t perceive it as an insult. I just want to stress this point because he claimed that some posts and comments, and the LW consensus about many things that real people deal with in the real world, are actually (factually) wrong. He has to tell me, because I’m not sure what he means, yet it is very important to know.
I understand what you’re saying qualitatively; I was trying to get at your quantitative estimates. The numbers will constrain your optimal strategy for extracting value from the site.
For example, if for every “good” post there are N “table-thumping” ones and N=20, it’s difficult-but-possible to find the “good” stuff. If N=200, it’s effectively impossible. If N=2, it’s pretty easy.
Conversely, at N=2 it is perhaps worth trying to convince the 2⁄3 majority to behave differently (the way you seem to be doing, sort of), but at N=20 you probably do better to figure out ways to flag the “good” 5%, concentrate your attention there, and allow the “table-thumpers” to play around on the less-valuable periphery in the hopes that maybe we’ll be inspired by your good example. (At N=200 you probably do better to create a different site where the top .5% of LW-contributions can be hosted.)
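For concreteness, the arithmetic behind those figures, assuming every post gets classified one way or the other: if each “good” post is accompanied by $N$ “table-thumping” ones, the good fraction is

$$\frac{1}{N+1}:\qquad N=2 \Rightarrow \tfrac{1}{3}\approx 33\%,\qquad N=20 \Rightarrow \tfrac{1}{21}\approx 4.8\%,\qquad N=200 \Rightarrow \tfrac{1}{201}\approx 0.5\%,$$

which is where the “5%” and “.5%” above come from.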
You’re right, of course, that this is a very imprecise way of talking about it. Given that I’m just asking about your off-the-cuff judgments rather than the results of your actual measurements, that seemed appropriate.
Ok, come to think of it, LWers have a higher IQ than the general population. That’s kind of not in question.
But this article was basically saying that rationalism is unpopular because there’s an IQ barrier. Most people with high IQ’s aren’t rational either, though. You can’t say “we have a rationalist worldview because we’re smart* if most smart people don’t.
On political correctness—it’s relevant in almost every discussion. I’d only cut it out entirely in cases when something serious was staked on my clear judgment and honest communication. If you’re on the crew of a sinking ship, and you think you know how to save it … then yeah, you’d better cut the crap. But almost every other conversation also has a social purpose. Signaling isn’t something you just turn off except on the rare occasions you meet a mundane.
So by default, talk on LW should include some fraction of statements whose purpose is to signal political correctness ? It seems to me that PC is a great way to make yourself irrational and horribly biased. Good for social signalling, useless for the martial art of rationality. Kind of like coming to the Dojo wearing high heels.
Some people here would say we are on a sinking ship (spaceship earth) and that this discussion is explicitly about how to save the ship by making the crew more sane. ;-0 But that may be stretching the metaphor.
Well, put it this way: I think that saying “the reason most people don’t agree with us is that they’re just not smart enough” is a bit of a jerk thing to say. It doesn’t represent us well and it’s not nice. If you’re absolutely convinced this is true, (and I think there’s not enough evidence for that), you should be much more circumspect in how you say it. Yes, I want to be pro-nice and anti-jerk.
LW does not behave like the crew of a sinking ship. As a matter of observation, it just doesn’t function that way. It’s partly a social or discussion forum.
Why do I have to be “absolutely convinced”? Can’t I give it a 90% credence and still say it? Or are we playing anti-epistemology and applying higher standards of evidence to statements that we find emotionally uncomfortable? Geez I leave for a few months and come back to find TEXTBOOK EXAMPLES of irrationality passing for LW debate! ;-)
Ok, at this point I say “oops.”
I basically posted without thinking too carefully; I knew I didn’t like something about Alexandros’ post. (Now that he’s rephrased it it’s starting to sound more plausible.) And I’m sorry if I came on too strong or insulted him personally.
Straight-up rational epistemology, where you’re only concerned with truth, is … a little unnatural to me, I have to admit. Doing it all the time (as opposed to just on special occasions when you heroically overcome your bias) would be a very different life.
Upvoted for honesty (we need more of this kind of thing I think)
I think you have a good point about image of LW—we must be careful to present ourselves, that is I think something I tend to forget, there is, in fact a need for good image, as well as for good rationality
Re. “the reason most people don’t agree with us is that they’re just not smart enough”...totally aside from the question of whether this sort of sentiment is liable to be offputting to a lot of people, I’ve very often wondered whether anyone who holds such a sentiment is at all worried about the consequences of an “Emperor’s New Clothes” effect.
What I mean by “Emperor’s New Clothes” effect is that, regardless of what a person’s actual views are on a given subject (or set of subjects), there’s really nothing stopping said person from just copying the favored vocabulary, language patterns, stated opinions, etc., of those they see as the cleverest/most prominent/most respectable members of a community they want to join and be accepted in.
E.g., in self-described “rationalist” communities, I’ve noted that lots of people involved (a) value intelligence (however they define it) highly, and (b) appear to enjoy being acknowledged as clever themselves. The easiest way to do this, of course, is to parrot others that the community of interest clearly thinks are the Smartest of the Smart. And in some situations I suspect the “parroting” can occur involuntarily, just as a result of reading a lot of the writing of someone you like, admire, or respect intellectually, even if you may not have any real, deep understanding of what you are saying.
So my question is...does anyone even care about this possibility? Or are “communities” largely in the business of collecting members and advocates who can talk the talk, regardless of what their brains are actually doing behind the scenes?
I suspect answers vary.
For my own part: if hordes of people who aren’t really rationalists start adopting, for purely signaling reasons, the trappings of epistemic hygiene… if they start providing arguments in defense of their positions and admitting when those arguments are shown to be wrong, for example, not because of a genuine desire for the truth but merely because of a desire to be seen that way… if they start reliably articulating their biases and identifying the operation of biases in others, merely because that’s the social norm… if they start tagging their assertions with confidence indicators and using those indicators consistently without actually having a deep-rooted commitment to avoiding implicitly overstating or understating their confidence… and so on and so forth…
...well, actually, I’d pretty much call that an unadulterated win. Sign me up for that future, please.
OTOH, if hordes of people just start talking about how smart and rational they are and how that makes them better than ordinary people, well, that’s not worth much to me.
I think at this point I should clarify that the article didn’t (intend to) say “the reason most people don’t agree with us is that they’re just not smart enough” but rather that for people without the capability to contribute to cutting edge maths, science, AI research and the like, our worldview is not that exciting and may therefore refuse to be convinced. Notice that it implies possible irrationality for many of the people who have joined thus far, as they commonly belong to the classes that our worldview values to a large extent. This includes myself.
Is it a nice thing to say? I personally do not feel comfortable bringing this stuff up and would prefer if things were differently. Perhaps the fact that I feel this way made this thought stand out as more urgent to discuss than many others that I have not bothered to post. In any case, this is the reality I perceive, and I’ve tried to be as inoffensive as possible while at the same time phrasing a coherent point. If anyone else is capable of expressing the core of this message in a less divisive way, they’re welcome to do so.
ok, clarified it makes more sense. I just extracted the wrong main point.
It seems to me that whether LW should include signaling statements or not, it clearly does. Changing that will require some pretty heroic efforts.
But I’ve only been here a few weeks. You’ve been around and actively engaged for a while (albeit on what seems like it was an abrupt hiatus), so I’m interested in your judgment here.
To try and get a little more concrete: suppose you had to classify all the posts and comments on LW as +/- MAR (useful/useless for the martial art of rationality) and +/- SSA (useful/useless to signal social affiliations… including but not limited to “PC” in the mainstream sense).
What ratio of +MAR:-MAR do you think you’d end up with? What ratio of +SSA:-SSA? What ratio of +MAR:+SSA?
Are there easily identifiable subsets of posts for which you think you’d end up with ratios importantly different than those? (For example, posts by particular authors, top-level posts vs. comments, or whatever.)
What ratios would you expect from a community that was actively trying to “save a sinking ship”?
To be more concrete about value for “Martial Art of Rationality” I think one would need a scalar measure, rather than a Yes/No. If you chose a Yes/No measure, your results would be very much dependent on where you drew the line. To answer the question you are asking in terms of ratios of +MAR:+SSA is really to talk about the distribution of posts in terms of rationality versus signalling.
I think that LW contains a few very very good posters, a cadre of very good posters, a few oddballs and a sea of people who sorta-kinda understand some stuff. And in many cases, it does contain posts and comments which shun rationality in order to thump the table in favour of some particular ideology or just general political correctness, which I have run foul of once or twice.
The problem, as I see it, is that if ideology gets to rule the roost on a whole host of important topics, then for those topics, LW becomes just another irrational, tribal internet echo chamber, where affirming the great chosen ideology becomes the most important task. However, I have been convinced that the benefit of having LW existing AT ALL outweighs this cost. To be plain, I think that the LW consensus is actually (factually) wrong about many things that real people deal with in the real world, but this is outweighed by the fact that LW is the only place in the world concerned with rationality.
Voted up for citing posts rather than posters for lack of rationality .
I think I’ve seen some table-thumping for political incorrectness as well as for political correctness.
So what you are saying here is that ideology is always irrational? Since this is a community blog devoted to refining rationality, why don’t you address the particular points you believe do constitute the irrational consensus? Or are you saying that some individuals here know that their ideology is irrational yet shun rationality in favor of it? That could be better phrased as there are people here who follow selfish goals and argue based on matters of taste. But how do you know that those people are aware of it, that they do not honestly believe that their disguised ideology is actually rationality?
I would love to know which kind of posts and comments, and in particular what consensus, you are referring to. This is very important to me, so if you don’t want to make it public I would like you to send me a short private message.
AND
are examples of the kind of thing that I would regard as the problem. Other examples are even more inflammatory, but basically the all boil down to:
Person1: X is a true fact about the world
Person2: But saying X is mean to {political correctness brownie points group Y}, and besides, you can’t be absolutely sure it’s true/I won’t believe it until you provide an impossibly high degree of evidence/we should stop talking about it or people will think we are mean!
The result is typically that LW can recite lots of rationalist principles, but when it comes to applying them to a significant number of real-world problems, LW is clueless.
Now one might reasonably argue that we don’t need to be right about everything. Sure, LW is mired in PC BS about X,Y and Z, but topics A,B,C,D,E, … which are also important are not subject to PC irrationality pressure. To an extent I buy this argument. However, reality is not a disconnected series of isolated topics: if you’re wrong about X,Y and Z you might make incorrect inferences about all sorts of other things.
For some of us, being perceived as nice is one of the most important ways we can help ourselves in real life. And the best way to be consistently perceived as nice is to be constantly concerned with the niceness of the statements one makes, and seriously try to avoid giving offense. If I became blunt and plain-spoken it would hurt me in real life. It would not be worth it to me. Except in very rare situations (such as if I’m personally responsible for saving lives, and I have to be deliberately rude to do it.)
In the interests of rationality, I’ll refrain in future from criticizing un-PC statements because they’re “not nice.” I don’t want to confuse anyone. But I can’t make those statements myself—that comes at a cost I won’t pay.
Do you ordinarily find that you have difficulty maintaining different registers(1) for different contexts?
If so, I sympathize and wish you luck in overcoming that difficulty. It is an enormously useful skill: as you say, being perceived as nice is valuable, and different communities perceive different kinds of behavior as nice, so you do best to learn to signal appropriately for different contexts (2).
If you don’t have difficulty with this, though, then your comment puzzles me. If you agree with FKARoko that LW norms support “un-PC” (3) posts, or in any event ought to, then what cost are you concerned about… what’s the cost? Conversely, if you don’t agree with him, why refrain from criticizing un-PC statements… what confusion?
==
(1) I mean “register” in the linguistics sense.
(2) Unless, of course, you spend all your time in only one community.
(3) Caveat: I don’t really understand what “PC” means; I’m using the term because it’s the term you and FKARoko both use. I gather you use it here as synonymous with “nice,” although in my own experience niceness often has more to do with how a statement is framed than what is actually being said.
For clarity’s sake: PC means politically correct and usually refers to political inoffensiveness. The term isn’t really apt for the current discussion because there was no talk of politics.
I don’t know if I’m great at code-switching. I can tell that LW is “not PC” or blunt-spoken. But the thing is, when some heuristic is good for you in most of your life, you may internalize it and simply make it a constant feature of your personality. For example, if it’s usually a bad idea for you to use swear words, you may be better off just not swearing at all, even when you’re in the saloon and swearing would be socially appropriate. You may want to personally identify as a non-curser. It makes double-sure that you’ll never swear at the wrong time.
If you don’t trust yourself to be socially agile in switching from situation to situation, then I think “better safe than sorry” makes sense.
Agreed as far as it goes. But that’s a big “if.”
The social agility you’re talking about is an important life skill. If I spend some time in contexts where a particular behavior has social benefits and some time in contexts where the same behavior has social costs, then I get the best results by staying aware of the context that I’m in and behaving appropriately.
That said, I do appreciate that it’s harder for some people than others. If I can’t do that, the next-best thing is to construct a superposition of rulesets and always apply it. This is similar to what you’re suggesting here… if the costs of cursing in the no-curse environments are much higher than the benefits of cursing in the yes-curse environments, adopting a “don’t curse regardless of context” rule as you suggest can work OK.
My point is, it’s a second-best option. Paying attention to my environment as it changes and responding accordingly has better payoffs, if I can manage it.
not just for some, for all of us. It is in everyone’s narrow self interest to sacrifice epistemology for signalling purposes. And in real life one has to do that. But here at least, I think that we should establish the opposite norm.
Look, what is the point of you trying to appear PC on LW? I for one am just not impressed. I already know that you’re from a certain demographic that implies lots of good things about you. But it implies bad things about you if you can’t turn off the the signalling BS in a context where it is socially very harmful, I.e. A rationality website.
Throughout the sequences it has been made clear that there usually is some local incentive for motivated cognition. Wanting to appear PC is no different: it’s just another reason that people have for blowing their thought process up, with all the usual downsides, e.g. The downside that you often simply don’t know what the cost will be because you would only be able to compute the cost of the motivated cognition if you were not engaging in it. Suffice it to say that I think we should have very strong norms against motivated cognition here on LW.
As I see it, the problem there is that saying “we shouldn’t be affected by this stuff” does not mean that we aren’t affected by this stuff. Knowing your cognitive biases allows for workarounds—it doesn’t cause them not to exist.
In particular, saying to others “you’re smart people, you should not be affected by such nuances” and then not bothering to put them into place oneself is almost a cliched way to come across as an arsehole on the Internet and have people not want to bother listening to the speaker, no matter how right they may be. The message communicated is not “you should be affected less”, but “I am inept.” This reduces one’s effectiveness.
Postel’s law: “Be conservative in what you send; be liberal in what you accept.”
If someone posts like a raging arsehole, they can be as right as they like, but people still won’t welcome them or want to listen to them. It’s not as effective a communication strategy as thinking before typing: your aim is to get the effect you want, not to win the conversation.
I speak here as a (hopefully) recovering arsehole. I have no plans to compromise the accuracy of what I’m saying, but it is useful to say it in a way that doesn’t repel people from even reading.
Sorry, I don’t understand you. Who said that someone should not be affected by cognitive biases?
I know you’re not impressed. I know folks around here don’t like it much. I’m glad there are such folks who say what they think without signaling. I respect that attitude, and it’s partly because I respect it that I’m here. I do want to know what people think when they’re solely concerned with accuracy. But I don’t really want to imitate them—maybe a little, but not thoroughly.
Truth is, I used to be socially awkward. These days, I’m not, but it’s not because I’m any cleverer at dealing with people, it’s because I’ve adopted a persona that’s all about being, let’s say, harmless. Positive and gentle. Trying to please. It’s kind of a good all-purpose heuristic—if I make some kind of faux pas, people will think “oh, she’s clueless, but she’s nice.” I’m good with nice-but-clueless.
And if you really want to be 100% nice-but-clueless, you have to be that way all the time. It’s not just political PC—I make a deliberate point of, as much as possible, never thinking or speaking badly of anyone. Not even in private. Not even in forums where the opposite norm holds. Once you start down that path, there’s a chance that you might be bitchy in public. And you can’t really afford that if you have other flaws and weaknesses, I think; I need people to forgive me my mistakes.
Would it be worth it to change? As you point out, I can’t know, because I’m within the world of motivated cognition. That said, I can think of circumstances where I probably ought to change—if I worked in the private sector, for example, or if I chose an advisor who really values frankness (both live possibilities.) There may come a point where “nice but clueless” stops working for me. And then I’ll really have to take this stuff seriously. But I have no idea who I’ll be, once I’m not nice-but-clueless.
A couple of times here, I’ve run into guys who find nice really annoying. It was useful for me to be a good bit blunter than unusual with them, and I’m inclined to believe that the experiment in flexibility was good for me.
The problem, I think, is that nice involves such a light touch that for some people, it fails to make contact.
I like nice. I prefer nice. And I think it’s got some very definite limits.
Then I think that you are in grave danger of getting pretty badly screwed by someone. There are genuinely bad people in this world, and there are lots of kind-of-bad people who will screw you over and rationalize it somehow. You have to have a healthy skepticism (not paranoia) about people’s motives, it’s the only way to prevent someone taking your money or your job/house etc. Seriously, forget the darned debate: if what you say is true, you are probably in serious danger of being taken advantage of in some way.
If I were you, I would seriously consider trying to improve your social skills the hard way, i.e. by learning social skills, and not engaging in potentially massively self-harming motivated cognition.
You may be right there.
Ah… I should have read this before replying to what you said elsewhere.
So, you’re aware that presenting as “nice but clueless” works against you in communities where cluelessness isn’t a point in your favor, but you prefer to optimize for the communities where it is.
OK, fair enough: that’s your choice to make.
I’m not sure, really. I’m open to changing my mind. I may have to, after all.
I doubt you’ll ever “have to,” in the sense of being forced to by circumstances. That’s what I meant by it being your choice to make.
Plenty of people live their entire lives optimizing for minimizing social friction at the cost of expressing their thoughts clearly and unambiguously… presenting as “nice but clueless,” in other words. “Going along to get along” is another way to say it. I suspect that as long as you make the choice to do so, you will be able to find situations that allow you to, just like they do.
That’s what value judgments are for, after all: they let you construct a preference order among possible states of the world, and therefore drive the choices you make. The decision to present as “nice but clueless” will affect the sorts of acquaintances you make, the sorts of communities you join, the sorts of organizations you work for, and so forth.
To put it differently: like it or not, you actually have a lot of power over your own future.
So the question is, how confident are you in the preference order you’re defending?
If you’re confident in it, then great… you’re choosing the world you want, which is as it should be, and I wish you joy of it.
OTOH, if you are uncertain, then I suggest that you might do better to explore the roots of that uncertainty yourself, rather than wait for events to somehow force you to change your mind.
I like that attitude. It is also not irrational, because you are aware of it and deliberately choose to be that way. I think Less Wrong features way too much "ought". I don't disagree with the consensus on cryonics at all, yet I'm not getting a contract because I'm too lazy and I like being lazy. My usual credo is that I can't lose as long as I don't leave my own path. That doesn't mean I am stubborn; I allow myself to alter my path situationally.
Rationality is about winning, and what constitutes winning is purely subjective. If you don't care whether the universe is tiled with paperclips rather than filled with apes having sex under the stars, that is completely rational, as long as you are aware of what exactly you do and don't care about.
Do you think that your beliefs regarding what you care about could be mistaken? That you might tell yourself that you care more about being lazy than about getting cryonics done, but that in fact, under reflection, you would prefer to get the contract?
I can't solve that problem right now. It implies that part of my volition is not, in fact, part of what I want, or should not be part of my goals. Why would I listen only to the part of my inner self that favors long-term decisions? I could take the car and drive to that Christmas party to visit my family and friends, or I could stay home because of black ice. After all, there will be many more Christmas parties without black ice in the future, and even more in the far future, when there will be backups. But where does this thinking lead? I want both, of course. On reflection, not dying is more important than the party. But on further reflection, I do not have enough data to conclude that any long-term payoff outweighs extensive restraint in the present.
There are also some practical considerations about cryonics. I am in Germany, and I don't know of any cryonics companies here. I don't know how likely it is that I would be frozen quickly enough in the case of an accident. If I learn in advance that I'm going to die, I can still get a contract at that point. So is the money really worth it, given that most pathways to death offer no expected benefit from a cryonics contract?
Because this is not a private mailing list?
Imagine some scientist or politician came here to get a dose of rationality, only to come across a discussion in which someone argues that he knows more and then tells everyone to keep their idiot mouths shut. This happened on Less Wrong, and the person who said it may even have been factually correct. Besides causing an uproar and damaging Less Wrong, it is also a bad way of communicating truth and rationality. Stating conclusions like that is not a way to refine rationality. People do not come here to be handed facts, e.g. that they are dumb, but to learn how to arrive at such factual conclusions.
IMO, if a top politician or scientist came here and found politically correct BS as the standard ideology on this so-called "rational" website, they would probably sigh and close the page, never to return. Why should they? They have better things to do with their time than listen to BS.
On the other hand, I don’t think they would be impressed if we didn’t have the skill to frame potentially inflammatory facts in a delicate way. I am not arguing against careful, delicate framing. I am arguing against MOTIVATED COGNITION.
I'm not suggesting that Less Wrong should conceal the truth to schmooze certain ideologies. What I am suggesting is that Less Wrong is NOT about teaching people how to score karma points on Less Wrong, but how to win in the real world.
Less Wrong has to be able to administer rationality at a dose rate people can actually tolerate.
Less Wrong has to take care that it does not shut itself up in its own ivory tower.
Less Wrong has to focus on teaching rationality skills people can actually use.
Motivated cognition can be a double-edged sword. If you overcompensate against political correctness, you can easily end up cultivating an inward-looking self-image that leads to ingroup bias. Less Wrong has to strike a balance between internal affairs and public relations.
Political-correctness bias is not the cure for ingroup bias. If you have an ingroup-bias problem, you solve it with the usual rationality tactics, like being honest about the weaknesses of the ingroup.
As far as I can tell, the best path is to vigorously fight PC bias and ingroup bias. You can have both. Really.
They got the SIAI funded.
The genome of the Ebola virus is a true fact about an organism, yet it would be dumb to post it on a microbiology forum. Besides, "if you don't agree you are dumb" is a statement that has to be backed by exceptional amounts of evidence. And if people who already disagree are not intelligent enough to grasp the arguments, they can only be convinced by evidence.
There are cases where data or ideas can be really hazardous. I don’t count “but it might hurt somebody’s precious feelings” as one of those cases.
I just came across this:
This seems to be neither here nor there as regards the present debate.
I assign some probability to security through obscurity working for biology, and some to it not working.
2+2=4; if you don't agree, you are dumb.
Let me clarify my last comment. It is really all about what we want. We just have to accept that Less Wrong is not only about refining rationality. Less Wrong also won't be able to refine rationality if it allows some topics to be discussed in great detail, because doing so would risk the platform's future. So every statement here has to be taken with a grain of salt and understood in a larger context. Proclaiming the truth might be rational if you value truth-telling in and of itself. But since rationality is about winning, you have to ask what constitutes winning. The answer to that question is ultimately ideological and a matter of taste.
Again, you are being logically rude. I refuted (I think) the idea that "'if you don't agree you are dumb' is a statement that has to be backed by exceptional amounts of evidence." Don't move the goalposts mid-debate. Admit that, in fact, there are some statements such that if you disagree with them, you are dumb, no massive dossier of evidence required.
So what is it that you are trying to argue that I am evading? I don't think you can generalize from the example of avoiding signaling LW's intellectual superiority to the general issue of political correctness. Some factual statements are simply bad arguments to use in a debate.
I'm not being logically rude; I'm just trying to argue that political correctness and epistemological rigor are not necessarily mutually exclusive. Further, if you want to output a plan for action, you had better tweak it for real-world use, which naturally must include some signaling. Only afterwards can one tackle the more fundamental issue of whether political correctness is rational in general, e.g. whether to try to overcome human nature.
I do not think that you have refuted it. I also believed that part of your argument was to assert that we sometimes shouldn’t keep quiet about the truth, whatever the consequences. I do not agree with that either.
Telling people they are dumb means that you are sufficiently sure that (1) you are right, (2) they are wrong and not just more demanding (wanting more evidence, or different kinds of evidence), and (3) the reason they disagree is that they are intellectually inferior. Further, even if you are sure someone is dumb, saying so is still a really bad argument, because it is not persuasive. If someone is dumb, you have to be even smarter to convince that person. And if you just proclaim that someone is dumb, maybe you are not as smart as you thought either.
Some people don't know that they are alive. Does that mean they are dumb? Eliezer Yudkowsky might be able to make sense of such a disorder because of all his background knowledge. But would he be able to do so if he had grown up without the chance to acquire his current set of skills? A lot of one's potential intelligence is unlocked by environmental circumstances, e.g. an advanced education. There are indeed people who possess less potential. Yet if we want to make them aware of their shortcomings, it is not rational to do so by telling them they are dumb, but rather by telling them to try to estimate their intelligence objectively. There are more effective ways to communicate the truth than proclaiming the conclusion.
My calculator agrees that 2+2=4; so what? If someone challenges your beliefs, it does not mean that the person is dumb, but perhaps that you accepted as given something that is less obvious than you think. The complete formal proof of 2 + 2 = 4 involves 2,452 subtheorems in a total of 25,933 steps.
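(For what it's worth, even the "trivial" case can be machine-checked. Here is a minimal sketch in Lean 4, assuming nothing beyond the standard library; the kernel accepts it by computation, whereas the step count quoted above refers to a proof spelled out from first principles.)

```lean
-- Machine-checked arithmetic: the kernel reduces both sides of 2 + 2 = 4
-- to the same numeral, so reflexivity (`rfl`) closes the goal.
example : 2 + 2 = 4 := rfl
```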
Yes, but if we are talking about real-world problems, then we have to deal with people who are dumb, and sometimes we also have to convince them in order to get what we want. It is rational to limit the truth output of a forum of truth-seekers. An analogy would be the intolerance of intolerance: to maximize tolerance, you have to be intolerant of intolerance. The same goes for rationality. You won't make the world a more rational place by telling irrational folks the truth, namely that they are irrational; that would just produce more irrational behavior.
You are being logically rude. Please don’t!
A belief is irrational if you use irrational methods of thinking to obtain it. I consider most irrational beliefs to be the result of ignorance of or incompetence in the methods of rationality, rather than selfishness or malice. (I guess we could argue about whether anti-epistemology is an example of incompetence or of willful going-astray.)
I can’t speak for Roko, but I imagine that on Less Wrong, almost all failures of rationality are the result of incompetence.
Even if I'm not able to understand my failure, I still want to know if someone thinks I am incompetent. I won't be able to understand how the person arrived at that conclusion, if it is due to a lack of intelligence on my part, but I'll be able to allow for the possibility and take it into account if I ever get stuck trying to reach a goal. So if someone honestly believes that I am too dumb, he or she should say so, and I won't perceive it as an insult. I just want to stress this point, because he claimed that some posts, some comments, and the LW consensus about many things that real people deal with in the real world are actually (factually) wrong. He has to tell me, because I'm not sure what he means, and it is very important to know.
I understand what you’re saying qualitatively; I was trying to get at your quantitative estimates. The numbers will constrain your optimal strategy for extracting value from the site.
For example, if for every "good" post there are N "table-thumping" ones and N=20, it's difficult but possible to find the "good" stuff. If N=200, it's effectively impossible. If N=2, it's pretty easy.
Conversely, at N=2 it is perhaps worth trying to convince the 2⁄3 majority to behave differently (the way you seem to be doing, sort of), but at N=20 you probably do better to figure out ways to flag the “good” 5%, concentrate your attention there, and allow the “table-thumpers” to play around on the less-valuable periphery in the hopes that maybe we’ll be inspired by your good example. (At N=200 you probably do better to create a different site where the top .5% of LW-contributions can be hosted.)
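(A quick sanity check on the fractions implied above, under the assumption that every "good" post comes with N "table-thumping" ones, so the fraction of good posts is 1/(N+1):

\[
\frac{1}{2+1} \approx 33\%, \qquad \frac{1}{20+1} \approx 4.8\%, \qquad \frac{1}{200+1} \approx 0.5\%,
\]

which lines up with the "2⁄3 majority", "good 5%", and "top .5%" figures used here.)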
You’re right, of course, that this is a very imprecise way of talking about it. Given that I’m just asking about your off-the-cuff judgments rather than the results of your actual measurements, that seemed appropriate.
Qualitative or quantitative measures of the value of LW are an interesting thing to think about. AFAIK we don’t have any at the moment.