I’m having trouble thinking up a useful response to your comment because I don’t really understand it as a whole. I understand most of the individual sentences, but when I try to pull them all together I get confused. So I’ll just respond to some isolated bits.
This reads like you reckon katydee & I were making the same point, while I’d thought I was making a different point that wasn’t a non sequitur. (Your comment seemed to me to rely on an implicit premise that making fun of things involves thinking of a good argument against them, so I disputed that implicit premise, which I’d read into your comment. But it looks like we mutually misunderstood each other. Ah well.)
Saying a response is entirely orthogonal to the thing it is responding to sort of just seems way too close to calling the author of the response a complete idiot in light of the cognitive biases inherent to the topic.
I’m not sure I follow and I don’t think I agree.
Do you think I should edit my reply to explicitly assert, “That has literally nothing to do with what I said,” regardless?
I probably would’ve if I were in your shoes. Even if katydee disagreed, the resulting discussion might have clarified things for at least one of you. (I doubt it’s worth making that edit now as this conversation’s mostly died down.) Personally, I’m usually content to tell someone outright “that’s true but irrelevant” or some such if they reply to me with a non sequitur (e.g.).
I’m seeing two likely interpretations of my original literal message:
“If you can [reason abstractly], isn’t that a good thing?”
“If you can [correct yourself], isn’t that a good thing?”
I interpreted it as saying the second one too. But in this context that point sounded irrelevant to me: if katydee warns someone that style S of argument is dangerous because it can make bad arguments sound compelling, a response along the lines of “but isn’t it good if you can correct yourself by thinking of good arguments?” doesn’t seem germane unless it leans on an implicit assumption that S is actually a reliable way of generating good arguments. (Without that qualifying assumption one could use the “but isn’t it good if you can correct yourself” argument to justify any old method of generating arguments, even generating arguments at random, because sometimes it’ll lead you to think of a good argument.)
I believe I have a bad habit of leaping between points because I take them to be more directly obvious than they commonly are. I think it might clarify things considerably if I start from the very beginning.
When I first saw Making Fun of Things is Easy as a heading, I was pleased, because I have long recognized that numerous otherwise intelligent people have an extremely disuseful habit of refusing to spend thought on things—even to the point of failing to think about them enough to make a rational assessment of the usefulness of thinking about them—by dismissing them as “hilariously wrong.” If LessWrong is getting to the point where its members are starting to recognize that positive emotional responses (laughter) can be disuseful, then I have reason to celebrate. Naturally, I had to read the article and see if my suspicion—that LessWrong is actually getting less wrong—was correct.
A large part of the damage caused by laughing things into mental obscurity is that the laughing parties lose their ability to think rationally about the subject they are laughing at. The solution to this is to stop laughing, sit down, and treat ideas that you consider ridiculous as potentially worth even preliminary consideration. Ideas like telepathy, for example. It’s bothersome that a community of rationalists should be unable to mentally function without excessive disclaiming. I realize this isn’t actually the case, but that members still feel the need to specify “this-isn’t-nonsense” is telling of something beyond those individual members themselves.
So I read the article, and it’s great. It touches on all the points that need to be touched upon. Then, at the very last sentence on the very last line at the very last word, I see a red flag. A warning about how your opinions could change. Good golly gosh. Wouldn’t that be ever so horrible? To have my own ability to reason used against me, by my own self, to defeat and replace my precious now-beliefs? Oh what a world!
...You can begin to see how I might derive frustration from the fact that the article explicitly warned against solving the very problem caused by epistemic laughter: “Don’t make fun, but still be wary of taking the stance seriously; you might end up with different beliefs!!”
I figured I really ought to take the opportunity to correct this otherwise innocuous big red flag. I suppose my original phrasing was too dualistic in meaning to be given the benefit of the doubt that I might have a point to be making. No no, clearly I am the one who needs correcting. What does it say about this place that inferential silence is a problem strong enough to merit its own discussion? Of course the ensuing comments made and all the questions I asked were before I had identified the eye of LessWrong’s focal mass. It’s a ton easier to navigate now that I know the one localized taboo that literally every active member cannot stand is the collective “LessWrong” itself. I can be vicious and vile and impolite and still get upvoted no problem, because everyone’s here to abdicate responsibility of thought to the collective. I can attack any one person, even the popular ones, and get upvoted. The cult isn’t bound to any one individual or idea that isn’t allowed to be attacked. It is when the collective itself is attacked that the normal human response of indignation is provoked. Suffice to say all my frustration would have been bypassed if I had focused more on arguing with the individuals rather than the mass of the collective where the actual problem here lies.
To get back to your actual argument: Any method of generating an argument is useful to the point of being justified. Making fun of things is an epistemic hazard because it stops that process. Making fun of things doesn’t rely on making bad arguments against them; it relies on dismissing them outright before having argued, discussed, or usefully thought about them at all in the first place. Bad arguments at least have the redeeming quality of being easy to argue against/correct. Have you ever tried to argue against a laugh and a shrug?
A list of the most difficult things to argue against:
“Rationalized” apathy.
Rational apathy.
Apathy.
A complex argument.
An intelligent argument.
A well-thought-out argument.
A well-constructed/prepared argument.
...
A bad argument.
Sophomoric objections.
Each of these comes in two flavors: Vanilla and meme. I’m working against memetic rationalized apathy in a community of people who generally consider themselves generally rational. If I were even a fragment less intelligent, this would be a stupid act.
I find that reply easier to follow, thanks.
The last sentence of katydee’s post doesn’t raise a red flag for me, I guess because I interpret it differently. I don’t read it as an argument against changing one’s opinion in itself, but as a reminder that the activity in footnote 2 isn’t just an idle exercise, and could lead to changing one’s mind on the basis of a cherry-picked argument (since the exercise is explicitly about trying to write an ad hoc opposing argument — it’s not about appraising evidence in a balanced, non-selective way). Warning people about changing their minds on the basis of that filtered evidence is reasonable.
I’m not too worried that inferential silence is a big enough problem on LW to merit its own discussion. While it is a problem, it’s not clear there’s an elegant way to fix it, and I don’t think LW suffers from it unusually badly; it seems like something that occurs routinely whenever humans try to communicate. As such the presence of inferential silence on LW doesn’t say anything special about LW.
The paragraph about LW being a cult where “everyone’s here to abdicate responsibility of thought to the collective” comes off to me as overblown. I’m not sure what LW’s “memetic rationalized apathy” is, either.
It looks like we interpret “making fun” differently. To me “making fun” connotes a verbal reaction, not just a laugh and a shrug. “Ha ha, get a load of this stupid idea!” is making fun, and hinges on the implicit bad (because circular) argument that an idea’s bad because it’s stupid. But a lone laugh or an apathetic shrug isn’t making fun, because there’s no real engagement; they’re just immediate & visible emotional reactions. So, as I see it, making fun often does rely on making bad arguments, even if those arguments are so transparently poor we hardly even register them as arguments. Anyway, in this paragraph, I’m getting into an argument about the meaning of a phrase, and arguments about the meanings of words & phrases risk being sterile, so I’d better stop here.
The problem is that most opinions people hold, even those of LessWrong’s users, are already based on filtered evidence. If confirmation bias wasn’t the default state of human affairs, it wouldn’t be a problem so noteworthy as to gain widespread understanding. (There are processes that can cause illegitimate spreading, but that isn’t the case with confirmation bias.) When you sit down to do the exercise and realize legitimate arguments (not merely ad hoc arguments) against your own views, you’re overcoming your confirmation bias (default) on that issue for the first time. This is why it is important to respect your partner in debate; without respecting their ability to reason and think things you haven’t, their mere disagreement with your permanent correctness directly causes condescension. Nonsensical ad-hoc arguments are more useful than no argument whatsoever; one has the quality of provoking thought. The only way otherwise rational people come to disagree is from the differing priors of their respective data sets; it’s not that the wrong one among them is thinking up nonsense and being negatively affected by it.
The truth is I don’t really read comments on LessWrong all that much. I can’t stand it. All I see being discussed and disagreed over are domain-specific trivial arguments. I recall someone on IRC once complaining that they hadn’t seen evidence that Eliezer_Yudkowsky ever really admits being wrong in the face of superior arguments. This same concept applies to the entirety of LessWrongers; nobody is really changing their deep beliefs after “seeing the light.” They’re seeing superior logic and tactics and adding those onto their model. The model still remains the same, for the most part. Politics is only a mind-killer insofar as the participants in the discussion are unable to correct their beliefs on physically and presently important issues. That there exist subjects that LessWrong’s users ban themselves from participation in is class A evidence of this. LessWrong only practices humble rationality in the realm of things that are theoretically relevant. The things that are actually shown to matter are taboo to even bring up because that might cause people to “realize” (confirmation bias) that they’re dealing with people they consider to be idiots. Slow progress is being made in terms of rationality by this community, but it is so cripplingly slow by my standards that it frustrates me. “You could do so much better if you would just accept this one single premise as plausible!” The end result is that LessWrong is advancing, yes, but not at a pace that exceeds the bare minimum of the average.
Everything this community has done up to now is a good warm-up, but now I’d like to start seeing some actual improvement where it counts.
It’s not the mere existence of inferential silence that is the issue here. Inferential silence exists everywhere on digital forums. What’s relevant is the exact degree to which the inferential silence occurs. For example, if nobody commented, upvoted, or downvoted, then LessWrong is just a disorganized blog. If all the topics worth discussing have their own posts and nobody posts anything new, and everyone stops checking for new content, the site is effectively dead. The measuring of inferential silence has the same purpose as asking, “Is this site useful to me?” Banned subjects are a severe form of inferential silence. We’re rationalists. We ought to be able to discuss any subject in a reasonable manner. Other places, when someone doesn’t care about a thread, they just don’t bother reading it. Here, you’re told not to post it. Because it’s immoral to distract all these rationalists who are supposed to be advancing the Singularity with temptation to debate (seemingly) irrelevant things. LessWrong places next to no value on self-restraint; better to restrain the world instead.
This is the part where things get difficult to navigate.
I predicted your reaction of considering the coherency of the collective as overblown. I’d already started modeling responses in my head when I got up from the computer after posting the comment. I don’t predict you’re terribly bothered by a significant degree of accuracy to the prediction; rather, I predict that, to you, it will seem only obvious that I should have been able to predict that. This will all seem fairly elementary to you. What I’m unsure about is the degree to which you are aware that you stand out from the rest of these folks. You’re exhibiting a deeper level of understanding of the usefulness of epistemic humility in bothering to speak to me and read my comments in the way that you are. You offer conversational pleasantries and pleasant offers for conversation, but that can be either a consciously recognized utility or an unconscious one, with varying degrees in between. I can already tell, though, what kind of path you’ve been on with that behavior. You’ll have seen and experienced things that most other LWers have not. Basically, what sets you apart is that you don’t suck at conversation.
It’s not that I predicted that you’d disagree or be unsure about what I was referring to, it’s more that the idea I understand, by virtue of being able to understand it, inherently lets me know that you will immediately agree if you can actually grasp the concept. It’s not that you’ll immediately see everything I have; that part will take time. What will happen is that you’ll have grasped the concept and the means to test the idea, though you’ll feel uncertain about it. You’ll of course be assessing the data you collect in the opposite of the manner that I do; while I search for all the clues indicating negatives, you’ll search for clues and reasoning that leave LessWrong with less blame—or maybe you’ll try to be more neutral about it (if you can determine where the middle ground lies). I wrote my last comment because I’d already concluded that you’ll be able to fully grasp my concept; but be forewarned: Understanding my lone hypothesis in light of no competing hypotheses could change your beliefs! (An irreparable change, clearly.)
I’ve more to say, but it won’t make sense to say it without receiving feedback about the more exact mechanics of your stage of grasping my concept. I predict you won’t notice anything out of the ordinary about the thoughts you’ll have thought in reading/responding to/pondering this. These predictions, again, will appear to be mundane.
When you sit down to do the exercise and realize legitimate arguments (not merely ad hoc arguments) against your own views, you’re overcoming your confirmation bias (default) on that issue for the first time.
That’s not obvious to me. I’d expect LWers to be the kind of high-NFC/TIE people who try to weigh evidence in a two-sided way before deciding to like a particular politician or organization in the first place, and would probably, having made that decision, try to remain aware opposing evidence exists.
Nonsensical ad-hoc arguments are more useful than no argument whatsoever; one has the quality of provoking thought.
I’m less optimistic. While nonsensical ad hoc arguments do provoke thoughts, those thoughts are sometimes things like, “Jesus, am I doomed to hear that shitty pseudo-argument every time I talk to people about this?” or “I already pre-empted that dud counterargument and they ignored me and went ahead and used it anyway!” or “Huh?!”, rather than “Oh, this other person seems to have misunderstanding [X]; I’d better say [Y] to try disabusing them of it”.
The only way otherwise rational people come to disagree is from the differing priors of their respective data sets; it’s not that the wrong one among them is thinking up nonsense and being negatively affected by it.
Unfortunately a lot of arguments don’t seem to be between “otherwise rational people”, in the sense you give here.
All I see being discussed and disagreed over are domain-specific trivial arguments.
But I’ve seen (and occasionally participated in) arguments here about macroeconomics, feminism, HIV & AIDS, DDT, peak oil, the riskiness of the 80,000 Hours strategy of getting rich to donate to charity, how to assess the importance of technologies, global warming, how much lead exposure harms children’s development, astronomical waste, the global demographic transition, and more. While these are domain-specific issues, I wouldn’t call these trivial. And I’ve seen broader, nontrivial arguments about developing epistemic rationality, whether at the personal or social level. (What’s the right tradeoff between epistemic & instrumental rationality? When should one trust science? How does the social structure of science affect the reliability of the body of knowledge we call science? How does one decide on priors? What are good 5-second skills that help reinforce good rationalist habits? Where do the insights & intuitions of experts come from? How feasible is rationality training for people of ordinary IQ?)
This same concept applies to the entirety of LessWrongers; nobody is really changing their deep beliefs after “seeing the light.” They’re seeing superior logic and tactics and adding those onto their model. The model still remains the same, for the most part.
That’s too vague for me to have a strong opinion about. (Presumably you don’t literally mean “nobody”, and I don’t know precisely which beliefs you’re referring to with “deep beliefs”.) But there are possible counterexamples. People have decided to dedicate years of their lives (and/or thousands of dollars) to attacking the problem of FAI because of their interactions with LW. I dimly recall seeing a lurker post here saying they cured their delusional mental illness by internalizing rationality lessons from the Sequences.
The things that are actually shown to matter are taboo to even bring up because that might cause people to “realize” (confirmation bias) that they’re dealing with people they consider to be idiots.
That’s a bit of an unfair & presumptuous way to put it. It’s not as if LW only realized human brains run on cognitive biases once it started having flamewars on taboo topics. The ubiquity of cognitive bias is the central dogma of LW if anything is; we already knew that the people we were dealing with were “idiots” in this respect. For another thing, there’s a more parsimonious explanation for why some topics are taboo here: because they lead to disproportionately unpleasant & unproductive arguments.
Everything this community has done up to now is a good warm-up, but now I’d like to start seeing some actual improvement where it counts.
Finally I can agree with you on something! Yes, me too, and we’re by no means the only ones. (I recognize I’m part of the problem here, being basically a rationalist kibitzer. I would be glad to be more rational, but I’m too lazy to put in the actual effort to become more rational. LW is mostly an entertainment device for me, albeit one that occasionally stretches my brain a little, like a book of crosswords.)
We’re rationalists. We ought to be able to discuss any subject in a reasonable manner.
Ideally, yes. Unfortunately, in reality, we’re still human, with the same bias-inducing hot buttons as everyone else. I think it’s legitimate to accommodate that by recognizing some topics reliably make people blow up, and cultivating LW-specific norms to avoid those topics (or at least damp the powder to minimize the risk of explosion). (I’d be worried if I thought LWers wanted to “restrain the world”, as you grandiosely put it, by extending that norm to everywhere beyond this community. But I don’t.)
I predicted your reaction of considering the coherency of the collective as overblown. [...] I don’t predict you’re terribly bothered by a significant degree of accuracy to the prediction; rather, I predict that, to you, it will seem only obvious that I should have been able to predict that. This will all seem fairly elementary to you.
Yeh, pretty much!
What I’m unsure about is the degree to which you are aware that you stand out from the rest of these folks. You’re exhibiting a deeper level of understanding of the usefulness of epistemic humility in bothering to speak to me and read my comments in the way that you are.
This is flattering and I’d like to believe it, but I suspect I’m just demonstrating my usual degree of getting the last word-ism, crossed with Someone Is Wrong On The Internet Syndrome. (Although this is far from the worst flare-up of those that I’ve had. Since then I’ve tried not to go on & on so much, but whether I’ve succeeded is, hrrrm, debatable.)
I’ve more to say, but it won’t make sense to say it without receiving feedback about the more exact mechanics of your stage of grasping my concept. I predict you won’t notice anything out of the ordinary about the thoughts you’ll have thought in reading/responding to/pondering this.
Right again. I still don’t have any idea what your concept/hypothesis is (although I expect it’ll be an anticlimax after all this build-up), but maybe what I’ve written here gives you some idea of how to pitch it.
Although I expect it’ll be an anticlimax after all this build-up.
It will, despite my fantasies, be anticlimactic, as you predict. While I predicted this already, I didn’t predict that you would consciously and vocally predict this yourself. My model updates thus: Though I was not consciously aware of the possibility that stating my predictions would be an invitation for you to state your own set of predictions, I am now aware that such a result is possible. What scenarios the practice is useful in, why it works, how it fails, when it does, and all such related questions are unknown. (This could be why my brain didn’t think to inform my consciousness of the possibility, now that I think about it in writing this.) A more useful tool is that I can now read that prediction as a strong potential from a state of it not having been stated; I can now read inferential silence slightly better. If not for general contexts, then at least for LessWrong to whatever degree. Most useful of all data packed into that sentence is this: I now know, dividing out the apathy, carelessness, and desires for the last word and Internet Correction, what you’re contextually looking for in this conversation. Effectively, I’m measuring your factored interest in what I have to say. The next factor to divide out is the pretense/build-up.
Maybe what I’ve written here gives you some idea of how to pitch it.
Certainly so, insofar as you were willing to reply. Though you didn’t seem it and there was no evidence, the thought crossed my mind that I’d gone too far and you were just not going to bother responding. I didn’t think I exceeded your boundaries, but I’ve known LessWrongers to conceal their true standards, in order to more fully detect “loonies” or “crackpots.”
There’s no sentence I can form (Understand style) that will stun you with sheer realization (rather than be more likely to convince you of lessened intelligence). This is primarily because building the framework for such realizations results in a level of understanding that makes the lone trigger assertion seem mundane by conceptual ambiance. That is, I predict that there is nothing I can say about my predictions of you that you will both recognize as accurate while also recognizing as extraordinary. I have one more primary prediction to make, but I’ll keep it to myself for the moment.
I predict that there is nothing I can say about my predictions of you that you will both recognize as accurate while also recognizing as extraordinary.
Yes, I expect whatever big conclusion you’re winding up to will prove either true & trivial, or surprising & false. (I am still a bit curious as to whether you’ll take the boring route or the crackpot route, although my curiosity is hardening into impatience.)
Do you have any actual reason (introspection doesn’t count) to “expect LWers to be the kind of high-NFC/TIE people who try to weigh evidence in a two-sided way before deciding”? I’m not asking if you can fathom or rationalize up a reason, I’m requesting the raw original basis for the assumption.
Your reduced optimism is a recognition within my assessment rather than without it; you agree, but you see deeper properties. Nonsensical arguments are not useful after a certain point, naturally, but where the point lies is a matter we can only determine after assessing each nonsensical idea in turn. We can detect patterns among the space of nonsensical hypotheses, but we’d be neglecting our duty as rationalists and Bayesians alike if we didn’t properly break down each hypothesis in turn to determine its proper weight and quality over the space of measured data. Solomonoff induction is what it is because it takes every possibility into account. Of course if I start off a discussion saying nonsense is useful, you can well predict what the reaction to that will be. It’s useful, to start off, from a state of ignorance. (The default state of all people, LessWrongers included.)
Macroeconomics: Semi-legitimate topic. There is room for severe rational disagreement. The implications for most participants in such discussions are very low, classifying the topic as irrelevant, despite the room for opinion variance.
Feminism: Arguably a legitimate point to contend over. I’ll allow this as evidence in counter to my stance if you can convince me that it was being legitimately argued: Someone would need to legitimately hold the stance of a feminist and not budge in terms of, “Well, I take feminism to mean...” Basically, I don’t really believe this is a point of contention rather than discussion for the generalized LessWrong collective.
HIV & AIDS: Can’t perform assessment. Was anyone actually positing non-consensus ideas in the discussion?
DDT: What’s to discuss? “Should it have been done?” From my understanding this is an issue of the past and thus qualifies as trivial by virtue of being causally disconnected from future actions. Not saying discussing the past isn’t useful, but it’s not exactly boldly adventurous thinking on anyone’s part.
Peak oil: Very legitimate topic. Surprised to hear that it was discussed here. Tell me though, how did the LessWrong collective vote on the comments composing the discussion? Is there a clear split composed of downvotes for comments arguing the dangers of peak oil, and upvotes for the other side? If you wish to argue that wrongness ought to be downvoted, I can address that.
Getting rich to donate to charity: Trivial. Absolutely trivial. This is the kind of LessWrong circlejerking that upsets me the most. It’s never a discussion about how to actually get rich, or what charities to donate to, which problems to solve, who is best qualified to do it, or any such useful discussion. It is always, every time, about whether or not that is the most optimal route. Since nobody is actually going to do anything useful as the result of such discussions, yes, literally, intellectual masturbation.
How to assess the importance of technologies: Semi-legitimate topic. What we need here are theories, new ideas, hypotheses; in spades. LessWrong hates that. New ideas, ideas that stand out, heck, anything less than previously established LessWrong general consensus, is downvoted. You could say LessWrong argues how to assess the importance, but never actually does assess the importance.
Global warming: Fully legitimate topic.
“How much lead exposure harms children’s development:” It’s a bad thing. What’s to argue or discuss? (Not requesting this as a topic, but demonstrating why I don’t think LessWrong’s discussing it is useful in any way.)
Astronomical waste: Same as above.
Global demographic transition: Legitimate, useful even, but trivial in the sense that most of what you’re doing is just looking at the data coming out; I don’t see any real immediate epistemic growth coming out of this.
And I’ve seen broader, nontrivial arguments about developing epistemic rationality, whether at the personal or social level.
Yes, that is the thing which I do credit LessWrong on. The problem is in the rate of advancement; nobody is really getting solid returns on this investment. It’s useful, but not in excess of the average usefulness coming from any other field of study or social process.
People have decided to dedicate years of their lives (and/or thousands of dollars) to attacking the problem of FAI because of their interactions with LW.
I have a strong opinion on this that LessWrong has more or less instructed me to censor. Suffice to say I am personally content with leaving that funding and effort in place.
I dimly recall seeing a lurker post here saying they cured their delusional mental illness by internalizing rationality lessons from the Sequences.
That is intensely interesting and the kind of thing I’d yell at you for not looking more into, let alone remembering only dimly. Events like these are where we’re beginning to detect returns on all this investment. I would immediately hold an interview in response to such a stimulus.
For another thing, there’s a more parsimonious explanation for why some topics are taboo here: because they lead to disproportionately unpleasant & unproductive arguments.
That is, word for word, thought for thought that wrote it, perception for perception that generated the thoughts, the exact basis of the understanding that leads me to make the arguments I am making now.
I think it’s legitimate to [cultivate] LW-specific norms to avoid those topics (or at least damp the powder to minimize the risk of explosion).
This is, primarily, why I do things other than oppose the subject bans. Leaving it banned, leaving it taboo, dampens the powder considerably. This is where I can help, if LessWrong could put up with the fact that I know how to navigate the transition. But of course that’s an extraordinary claim; I’m not allowed to make it. First I have to give evidence that I can do it. Do what? Improve LessWrong on mass scale. Evidence of that? In what form? Should I bring about the Singularity? Should I improve some other (equally resistant) rationalist community? What evidence can I possibly give of my ability to do such a thing? (The last person I asked this question to was unable to divine the answer.)
I’m left with having to argue that I’m on a level where I can manage a community of rationalists. It’s not an argument any LessWronger is going to like very much at all. You’re able to listen to it now because you’re not the average LessWronger. You’re different, and if you’ve properly taken the time to reflect on the opening question of this comment, you’ll know exactly why that is. I’m not telling you this to flatter you (though it is reason to be flattered), but rather because I need you to be slightly more self-aware in order for you to see the true face of LessWrong that’s hidden behind your assumption that the members of the mass are any bit similar to yourself on an epistemic level. How exactly to utilize that is something I’ve yet to fully ascertain, but it is advanced by this conversation.
LW is mostly an entertainment device for me, albeit one that occasionally stretches my brain a little, like a book of crosswords.
Interesting article, and I’m surprised/relieved/excited to see just how upvoted it’s been. I can say this much: Wanting the last word, wanting to Correct the Internet… These are useful things that advance rationality. Apathy is an even more powerful force than either of those. I know a few ways to use it usefully. You’re part of the solution, but you’re not seeing it yet, because you’re not seeing how far behind the mass really is.
I’d be worried if I thought LWers wanted to “restrain the world”, as you grandiosely put it.
LessWrong is a single point within a growing Singularity. I speak in grandiose terms because the implications of LessWrong’s existence, growth, and path are themselves grand. Politics is one of three memetically spread conversational taboos, outside of LessWrong. LessWrong merely formalized this generational wisdom. As Facebook usage picks up, and the art of internet argument is brought to the masses, we’re seeing an increase in socioeconomic and sociopolitical debate. This is correct, and useful. However, nobody aside from myself and a few others that I’ve met seems to be noticing this. LessWrong itself is going to become generationally memetic. This is correct, and useful. When, exactly, this will happen, is a function primarily of society. What, exactly, LessWrong looks like at that moment in history will offset billions of fates. Little cracks and biases will form cavernous gaps in a civilization’s mindset. This moment in history is far off, so we’re safe for the time being. (If that moment were right now, I would be spending as much of my time as possible working on AGI to crush the resulting leviathan.)
Focusing on this one currently-LessWrong-specific meme, what do you see happening if LW’s memetic moment were right now? Now is LessWrong merely restraining its own members?
Do you have any actual reason (introspection doesn’t count) to “expect LWers to be the kind of high-NFC/TIE people who try to weigh evidence in a two-sided way before deciding”? [...] I’m requesting the raw original basis for the assumption.
LWers self-report having above-average IQs. (One can argue that those numbers are too high, as I’ve done, but those are just arguments about degree.) People with more cognitive firepower to direct at problems are presumably going to do so more often.
LWers self-report above-average AQs. (Again, one might argue those AQs are exaggerated, but the sign of the effect is surely right given LW’s nerdy bent.) This is evidence in favour of LWers being people who tend to automatically apply a fine-grained (if not outright pedantic) and systematic thinking style when confronted with a new person or organization to think about.
Two linked observations. One: a fallacy/heuristic that analytical people often lean on is treating reversed stupidity as intelligence. Two: the political stupidity that an analytical person is likely to find most salient is the stupidity coming from people with firmly held, off-centre political views. Bringing the two together: even before discovering LW, LWers are the kind of analytical types who’d apply the reversed stupidity heuristic to politics, and infer from it that the way to avoid political stupidity is to postpone judgement by trying to look at Both Sides before committing to a political position.
Every time Eliezer writes a new chapter of his HPMoR fanfic, LW’s Discussion section explodes in a frenzy of speculation and attempts to integrate disparate blobs of evidence into predictions about what’s going to happen next, with a zeal most uninterested outside observers might find hard to understand. In line with nerd stereotype, LWers can’t even read a Harry Potter story without itching to poke holes in it.
(Have to dash out of the house now but I’ll comment on the rest soon.)
Nonsensical arguments are not useful after a certain point, naturally, but where the point lies is a matter we can only determine after assessing each nonsensical idea in turn. We can detect patterns among the space of nonsensical hypotheses, but we’d be neglecting our duty as rationalists and Bayesians alike if [...]
I agree with that, read literally, but I disagree with the implied conclusion. Nonsensical arguments hit diminishing (and indeed negative) returns so quickly that in practice they’re nearly useless. (There are situations where this isn’t so, namely educational ones, where having a pupil or student express their muddled understanding makes it possible to correct them. But I don’t think you have that sort of didactic context in mind.)
Feminism: Arguably a legitimate point to contend over. I’ll allow this as evidence in counter to my stance if you can convince me that it was being legitimately argued: Someone would need to legitimately hold the stance of a feminist and not budge in terms of, “Well, I take feminism to mean...” Basically, I don’t really believe this is a point of contention rather than discussion for the generalized LessWrong collective.
Hmm. I tend not to wade into the arguments about feminism so I don’t remember any examples that unambiguously meet your criteria, and some quick Google searches don’t give me any either, although you might have more luck. Still, even without evidence on hand sufficient to convince a sceptic, I’m fairly sure feminism, and related issues like pick-up artistry and optimal ways to start romantic relationships, are contentious topics on LW. (In fact I think there’s something approaching a mild norm against gratuitously bringing up those topics because Less Wrong Doesn’t Do Them Well.)
HIV & AIDS: Can’t perform assessment. Was anyone actually positing non-consensus ideas in the discussion?
Peak oil: Very legitimate topic. Surprised to hear that it was discussed here. Tell me though, how did the LessWrong collective vote on the comments composing the discussion? Is there a clear split composed of downvotes for comments arguing the dangers of peak oil, and upvotes for the other side?
Getting rich to donate to charity: Trivial. Absolutely trivial. This is the kind of LessWrong circlejerking that upsets me the most. It’s never a discussion about how to actually get rich, or what charities to donate to, which problems to solve, who is best qualified to do it, or any such useful discussion.
I had hoped that your going through my list of examples point by point would clarify how you were judging which topics were “legitimate” & nontrivial, but I’m still unsure. In some ways it seems like you’re judging topics based on whether they’re things LWers are actually doing something about, but LWers aren’t (as far as I know) doing anything more about global warming or peak oil than they are about astronomical waste or the (insufficient speed of the) global demographic transition. So what makes the former more legit than the latter?
People have decided to dedicate years of their lives (and/or thousands of dollars) to attacking the problem of FAI because of their interactions with LW.
I have a strong opinion on this that LessWrong has more or less instructed me to censor. Suffice to say I am personally content with leaving that funding and effort in place.
The point I meant to make in bringing that up was not that you should cheer people on for dedicating time & money to FAI; it was that people doing so is an existence proof that some LWers are “changing their deep beliefs after ‘seeing the light’”. If someone goes, “gee, I used to think I should devote my life to philosophy/writing/computer programming/medicine/social work/law, but now I read LW I just want to throw money at MIRI, or fly to California to help out with CFAR”, and then they actually follow through, one can hardly accuse them of not changing their deep beliefs!
That is intensely interesting and the kind of thing I’d yell at you for not looking more into, let alone remembering only dimly. [...] I would immediately hold an interview in response to such a stimulus.
Unless my memory’s playing tricks on me, Eliezer did ask that person to elaborate, but got no response.
This is where I can help, [...] I know how to navigate the transition. But of course that’s an extraordinary claim; I’m not allowed to make it. First I have to give evidence that I can do it. Do what? Improve LessWrong on mass scale. [...] What evidence can I possibly give of my ability to do such a thing? (The last person I asked this question to was unable to divine the answer.)
It seems pretty sensible to me to demand evidence when someone on the fringes of an established community says they’re convinced they know exactly (1) how to singlehandedly overhaul that community, and (2) what to aim for in overhauling it.
I can’t divine the answer you have in mind either.
I’m left with having to argue that I’m on a level where I can manage a community of rationalists. It’s not an argument any LessWronger is going to like very much at all. You’re able to listen to it now because you’re not the average LessWronger.
I don’t think you’re making the argument you think you are. The argument I’m hearing is that LW isn’t reaching its full potential because LWers sit around jacking each other off rather than getting shit done. You haven’t actually mounted an argument for your own managerial superiority yet.
You’re different, and if you’ve properly taken the time to reflect on the opening question of this comment, you’ll know exactly why that is. [...] I need you to be slightly more self-aware in order for you to see the true face of LessWrong that’s hidden behind your assumption that the members of the mass are any bit similar to yourself on an epistemic level.
How about this: I need you to spell out what you mean with this “true face of LessWrong” stuff. (And ideally why you think I’m different & special. The only evidence you’ve cited so far is that I’ve bothered to argue with you!) I doubt I’m nearly as astute as you think I am, not least because I can’t discern what you’re saying when you start laying on the gnomic flattery.
LessWrong is a single point within a growing Singularity. [Rest of paragraph snipped.]
My own hunch: LW will carry on being a reasonable but not spectacular success for MIRI. It’ll continue serving as a pipeline of potential donors to (and workers for) MIRI & CFAR, growing steadily but not astoundingly for another decade or so until it basically runs its course.
Focusing on this one currently-LessWrong-specific meme, what do you see happening if LW’s memetic moment were right now? Now is LessWrong merely restraining its own members?
OK, yes, if the LW memeplex went viral and imprinted itself on the minds of an entire generation, then by definition it’d be silly for me to airily say, “oh, that’s just an LW-specific meme, nothing to worry about”. But I don’t worry about that risk much for two reasons: the outside view says LW most likely won’t be that successful; and people love to argue politics, and are likely to argue politics even if most of them end up believing in (and overinterpreting) “Politics is the Mindkiller”. Little political scuffles still break out here, don’t they?
There are situations where this isn’t so, namely educational ones, where having a pupil or student express their muddled understanding makes it possible to correct them. But I don’t think you have that sort of didactic context in mind.
I do, actually, which raises the question as to why you think I didn’t have that in mind. Did you not realize that LessWrong and pretty much our entire world civilization are in such a didactic state? Moreover, if we weren’t in such a didactic state, why does LessWrong exist? Does the art of human rationality not have vast room to improve? This honestly seems like a highly contradictory stance, so I hope I’m not attacking a straw man.
This would appear to be false.
So it would. Thank you for taking the time to track down those articles. As always, it’s given me a few new ideas about how to work with LessWrong.
LWers aren’t (as far as I know) doing anything more about global warming or peak oil than they are about astronomical waste or the (insufficient speed of the) global demographic transition. So what makes the former more legit than the latter?
I was using a rough estimate for legitimacy; I really just want LessWrong to be more of an active force in the world. There are topics and discussions that further this process and there are topics and discussions that simply do not. Similarly, there are topics and discussions where you can pretend you’re disagreeing, but not really honing your rationality in any way by participating. For reference, this conversation isn’t honing our rationality very well; we’re already pretty finely tuned. What’s happening between us now is current-optimum information exchange. I’m providing you with tangible structural components, and you’re providing me with excellent calibration data.
If someone goes, “gee, I used to think I should devote my life to philosophy/writing/computer programming/medicine/social work/law, but now I read LW I just want to throw money at MIRI, or fly to California to help out with CFAR”, and then they actually follow through, one can hardly accuse them of not changing their deep beliefs!
Oh but that is very much exactly what I can do!
In each and every one of those cases you will find that the person had not spent sufficient time reflecting on the usefulness of thought and refined reasoning, or else on uFAI and existential risks. The state in which these ideas existed in their mind was not a “deep belief” state, but rather a relatively blank slate primed to receive the first idea that came to mind. uFAI is not a high-class danger; EY is wrong, and the funding and effort is, in large part, illegitimate. I am personally content leaving that fear, effort and funding in place precisely because I can milk it for my own personal benefit. Does every such person who reads the sequences run off to donate or start having nightmares about FAI punishing them for not donating? Absolutely, positively, this is not the case.
Deep beliefs are an entirely different class of psychological construct. Imagine I am very much of the belief that AI cannot be created because there’s something fundamental in the organic brain that a machine cannot replicate. What will reading every AI-relevant article in the sequences get me? Will my deep (and irrational) beliefs be overridden and replaced with AI existential fear? It is very difficult for me to assume you’ll do anything but agree that such things do not happen, but I must leave open the possibility that you’ll see something that I missed. This is a relatively strong belief of mine, but unlike most others, I will never close myself off to new ideas. I am very much of the intention that child-like plasticity can be maintained so long as I do not make the conscious decision to close myself off and pretend I know more than I actually do.
Eliezer did ask that person to elaborate, but got no response.
Ah. No harm, no foul, then.
You haven’t actually mounted an argument for your own managerial superiority yet.
I’ve been masking heavily. To be honest, my ideas were embedded many replies ago. I’m only responding now to see what all you have to offer, what level you’re at, and what levels and subjects you’re overtly receptive to. (And on the off-chance, picking up an observer or two.)
I need you to be slightly more self-aware in order [...]
How about this: I need you to spell out what you mean with this “true face of LessWrong” stuff. (And ideally why you think I’m different & special.)
“Self-aware” is a non-trivial aspect here. It’s not something I can communicate simply by asserting it, because you can only trust the assertion so much, especially given that the assertion is about you. Among other things, I’m measuring the rate at which you come to realizations. “If you’ve properly taken the time to reflect on the opening question of this comment,” is more than enough of a clue. That you haven’t put the reflection in simply from my cluing gives me a very detailed picture of how much you currently trust my judgment. I actually thought it was pretty awesome that you responded to the opening question in an isolated reply and had to rush out right after answering it, giving you very much more time to have reflected on it than the case of serially reading and replying without expending too much mental effort in doing so. I’m really not here to convince you of my societal/managerial competence by direct demonstration; this is just gathering critical calibration data on my part.
I’ve already spelled it out pretty damn concisely. Recognizing the differences between yourself and the people you like to think are very much like you is uniquely up to you.
My own hunch:
Yeah, pretty much. LessWrong’s memetic moment in history isn’t necessarily at a point in time at which it is active. That’s sort of the premise of the concern of LessWrong’s immediate memeplex going viral. As the population’s intelligence slowly increases, it’ll eventually hit a sweet spot where LessWrong’s content will resonate with it.
...But yeah, ban on politics isn’t one of the dangerous LessWrong memes.
I do, actually, which raises the question as to why you think I didn’t have that in mind. Did you not realize that LessWrong and pretty much our entire world civilization are in such a didactic state?
I did not. And do not, in fact. Those didactic states are states where there’s someone who’s clearly the teacher (primarily interested in passing on knowledge), and someone who’s clearly the pupil (or pupils plural — but however many, the pupil(s) are well aware they’re not the teacher). But on LW and most other places where grown-ups discuss things, things don’t run so much on a teacher-student model; it’s mostly peers arguing with each other on a roughly even footing, and in a lot of those arguments, nobody’s thinking of themselves as the pupil. Even though people are still learning from each other in such situations, they’re not what I had in mind as “didactic”.
In hindsight I should’ve used the word “pedagogical” rather than “didactic”.
Moreover, if we weren’t in such a didactic state, why does LessWrong exist? Does the art of human rationality not have vast room to improve?
I think these questions are driven by misunderstandings of what I meant by “didactic context”. What I wrote above might clarify.
This would appear to be false.
So it would. Thank you for taking the time to track down those articles.
Thank you for updating in the face of evidence.
I was using a rough estimate for legitimacy; I really just want LessWrong to be more of an active force in the world.
Fair enough.
In each and every one of those cases you will find that the person had not spent sufficient time reflecting on the usefulness of thought and refined reasoning, or else on uFAI and existential risks. The state in which these ideas existed in their mind was not a “deep belief” state, but rather a relatively blank slate primed to receive the first idea that came to mind.
I interpreted “deep beliefs” as referring to beliefs that matter enough to affect the believer’s behaviour. Under that interpretation, any new belief that leads to a major, consistent change in someone’s behaviour (e.g. changing jobs to donate thousands to MIRI) would seem to imply a change in deep beliefs. You evidently have a different meaning of “deep belief” in mind but I still don’t know what (even after reading that paragraph and the one after it).
“Self-aware” is a non-trivial aspect here. It’s not something I can communicate simply by asserting it, because you can only trust the assertion so much, especially given that the assertion is about you. [...] “If you’ve properly taken the time to reflect on the opening question of this comment,” is more than enough of a clue. [...] I’m really not here to convince you of my societal/managerial competence by direct demonstration; this is just gathering critical calibration data on my part.
I’ve already spelled it out pretty damn concisely. Recognizing the differences between yourself and the people you like to think are very much like you is uniquely up to you.
Hrmm. Well, that wraps up that branch of the conversation quite tidily.
LessWrong’s memetic moment in history isn’t necessarily at a point in time at which it is active.
I suppose that’s true...
That’s sort of the premise of the concern of LessWrong’s immediate memeplex going viral. As the population’s intelligence slowly increases, it’ll eventually hit a sweet spot where LessWrong’s content will resonate with it.
...but I’d still soften that “will” to a “might, someday, conceivably”. Things don’t go viral in so predictable a fashion. (And even when they do, they often go viral as short-term fads.)
Another reason I’m not too worried: the downsides of LW memes invading everyone’s head would be relatively small. People believe all sorts of screamingly irrational and generally worse things already.
I’m having trouble thinking up a useful response to your comment because I don’t really understand it as a whole. I understand most of the individual sentences, but when I try to pull them all together I get confused. So I’ll just respond to some isolated bits.
This reads like you reckon katydee & I were making the same point, while I’d thought I was making a different point that wasn’t a non sequitur. (Your comment seemed to me to rely on an implicit premise that making fun of things involves thinking of a good argument against them, so I disputed that implicit premise, which I’d read into your comment. But it looks like we mutually misunderstood each other. Ah well.)
I’m not sure I follow and I don’t think I agree.
I probably would’ve if I were in your shoes. Even if katydee disagreed, the resulting discussion might have clarified things for at least one of you. (I doubt it’s worth making that edit now as this conversation’s mostly died down.) Personally, I’m usually content to tell someone outright “that’s true but irrelevant” or some such if they reply to me with a non sequitur (e.g.).
I interpreted it as saying the second one too. But in this context that point sounded irrelevant to me: if katydee warns someone that style S of argument is dangerous because it can make bad arguments sound compelling, a response along the lines of “but isn’t it good if you can correct yourself by thinking of good arguments?” doesn’t seem germane unless it leans on an implicit assumption that S is actually a reliable way of generating good arguments. (Without that qualifying assumption one could use the “but isn’t it good if you can correct yourself” argument to justify any old method of generating arguments, even generating arguments at random, because sometimes it’ll lead you to think of a good argument.)
I believe I have a bad habit of leaping between points for understanding them to be more directly obvious than they commonly are. I think it might clarify things considerably if I start from the very beginning.
When I first saw Making Fun of Things is Easy as a heading, I was pleased, because I have long recognized that numerous otherwise intelligent people have an extremely disuseful habit of refusing to spend thought on things—even to the point of failing to think about it enough to make a rational assessment of the usefulness of thinking about it—by dismissing them as “hilariously wrong.” If LessWrong is getting to the point where they’re starting to recognize positive emotional responses (laughter) can be disuseful, then I have reason to celebrate. Naturally, I had to read the article and see if my suspicion—that LessWrong is actually getting less wrong—was correct.
A large part of the damage caused by laughing things into mental obscurity is that the laughing parties lose their ability to think rationally about the subject they are laughing at. The solution to this is to stop laughing, sit down, and take ideas that you consider ridiculous as potentially holding value in being even preliminarily considered. Ideas like telepathy, for example. It’s bothersome that a community of rationalists should be unable to mentally function without excessive disclaiming. I realize this isn’t actually the case, but that members still feel the need to specify “this-isn’t-nonsense” is telling of something beyond those individual members themselves.
So I read the article, and it’s great. It touches on all the points that need to be touched upon. Then, at the very last sentence on the very last line at the very last word, I see a red flag. A warning about how your opinions could change. Good golly gosh. Wouldn’t that be ever so horrible? To have my own ability to reason used against me, by my own self, to defeat and replace my precious now-beliefs? Oh what a world!
...You can begin to see how I might derive frustration from the fact that solving the very problem caused by epistemic laughter was explicitly warned against: “Don’t make fun, but still be wary of taking the stance seriously; you might end up with different beliefs!!”
I figured I really ought to take the opportunity to correct this otherwise innocuous big red flag. I suppose my original phrasing was too dualistic in meaning to be given the benefit of the doubt that I might have a point to make. No no, clearly I am the one who needs correcting. What does it say about this place that inferential silence is a problem strong enough to merit its own discussion? Of course, the ensuing comments I made and all the questions I asked came before I had identified the eye of LessWrong’s focal mass. It’s a ton easier to navigate now that I know the one localized taboo that literally every active member cannot stand is the collective “LessWrong” itself. I can be vicious and vile and impolite and still get upvoted no problem, because everyone’s here to abdicate responsibility of thought to the collective. I can attack any one person, even the popular ones, and get upvoted. The cult isn’t bound to any one individual or idea that isn’t allowed to be attacked. It is when the collective itself is attacked that the normal human response of indignation is provoked. Suffice it to say, all my frustration would have been bypassed if I had focused more on arguing with the individuals rather than with the mass of the collective, where the actual problem lies.
To get back to your actual argument: Any method of generating an argument is useful to the point of being justified. Making fun of things is an epistemic hazard because it stops that process. Making fun of things doesn’t rely on making bad arguments against them; it relies on dismissing them outright before having argued, discussed, or usefully thought about them at all in the first place. Bad arguments at least have the redeeming quality of being easy to argue against/correct. Have you ever tried to argue against a laugh and a shrug?
A list of the most difficult things to argue against:
“Rationalized” apathy.
Rational apathy.
Apathy.
A complex argument.
An intelligent argument.
A well-thought-out argument.
A well-constructed/prepared argument.
...
A bad argument.
Sophomoric objections.
Each of these comes in two flavors: Vanilla and meme. I’m working against memetic rationalized apathy in a community of people who generally consider themselves generally rational. If I were even a fragment less intelligent, this would be a stupid act.
I find that reply easier to follow, thanks.
The last sentence of katydee’s post doesn’t raise a red flag for me, I guess because I interpret it differently. I don’t read it as an argument against changing one’s opinion in itself, but as a reminder that the activity in footnote 2 isn’t just an idle exercise, and could lead to changing one’s mind on the basis of a cherry-picked argument (since the exercise is explicitly about trying to write an ad hoc opposing argument — it’s not about appraising evidence in a balanced, non-selective way). Warning people about changing their minds on the basis of that filtered evidence is reasonable.
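(A small Bayesian gloss of my own here, not something katydee’s post spells out: if the exercise guarantees you’ll go looking for an opposing argument whether or not your current view H is true, then the mere existence of the argument you find carries almost no information, since
$$\frac{P(\text{argument found} \mid \neg H)}{P(\text{argument found} \mid H)} \approx 1,$$
so the posterior $P(H \mid \text{argument found})$ should barely move. It’s the argument’s actual quality, weighed against everything else you know, that ought to do the updating, which is roughly why a warning against changing your mind on the strength of the exercise alone seems sensible to me.)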
I’m not too worried that inferential silence is a big enough problem on LW to merit its own discussion. While it is a problem, it’s not clear there’s an elegant way to fix it, and I don’t think LW suffers from it unusually badly; it seems like something that occurs routinely whenever humans try to communicate. As such the presence of inferential silence on LW doesn’t say anything special about LW.
The paragraph about LW being a cult where “everyone’s here to abdicate responsibility of thought to the collective” comes off to me as overblown. I’m not sure what LW’s “memetic rationalized apathy” is, either.
It looks like we interpret “making fun” differently. To me “making fun” connotes a verbal reaction, not just a laugh and a shrug. “Ha ha, get a load of this stupid idea!” is making fun, and hinges on the implicit bad (because circular) argument that an idea’s bad because it’s stupid. But a lone laugh or an apathetic shrug isn’t making fun, because there’s no real engagement; they’re just immediate & visible emotional reactions. So, as I see it, making fun often does rely on making bad arguments, even if those arguments are so transparently poor we hardly even register them as arguments. Anyway, in this paragraph, I’m getting into an argument about the meaning of a phrase, and arguments about the meanings of words & phrases risk being sterile, so I’d better stop here.
The problem is that most opinions people hold, even those of LessWrong’s users, are already based on filtered evidence. If confirmation bias weren’t the default state of human affairs, it wouldn’t be a problem so noteworthy as to gain widespread understanding. (There are processes that can cause illegitimate spreading, but that isn’t the case with confirmation bias.) When you sit down to do the exercise and realize legitimate arguments (not merely ad hoc arguments) against your own views, you’re overcoming your confirmation bias (the default) on that issue for the first time. This is why it is important to respect your partner in debate; without respecting their ability to reason and to think things you haven’t, their mere disagreement with your permanent correctness directly causes condescension. Nonsensical ad hoc arguments are more useful than no argument whatsoever; the former at least have the quality of provoking thought. The only way otherwise rational people come to disagree is from the differing priors of their respective data sets; it’s not that the wrong one among them is thinking up nonsense and being negatively affected by it.
The truth is I don’t really read comments on LessWrong all that much. I can’t stand it. All I see being discussed and disagreed over are domain-specific trivial arguments. I recall someone on IRC once criticizing that they hadn’t seen evidence that Eliezer_Yudkowsky ever really admits being wrong in the face of superior arguments. The same concept applies to LessWrongers as a whole; nobody is really changing their deep beliefs after “seeing the light.” They’re seeing superior logic and tactics and adding those onto their model. The model itself remains the same, for the most part. Politics is only a mind-killer insofar as the participants in the discussion are unable to correct their beliefs on physically and presently important issues. That there exist subjects LessWrong’s users ban themselves from participating in is class A evidence of this. LessWrong only practices humble rationality in the realm of things that are theoretically relevant. The things that are actually shown to matter are taboo to even bring up, because that might cause people to “realize” (confirmation bias) that they’re dealing with people they consider to be idiots. Slow progress is being made in terms of rationality by this community, but it is so cripplingly slow by my standards that it frustrates me. “You could do so much better if you would just accept this one single premise as plausible!” The end result is that LessWrong is advancing, yes, but not at a pace that exceeds the bare minimum of the average.
Everything this community has done up to now is a good warm-up, but now I’d like to start seeing some actual improvement where it counts.
It’s not the mere existence of inferential silence that is the issue here. Inferential silence exists everywhere on digital forums. What’s relevant is the exact degree to which the inferential silence occurs. For example, if nobody commented, upvoted, or downvoted, then LessWrong would just be a disorganized blog. If all the topics worth discussing have their own posts, nobody posts anything new, and everyone stops checking for new content, the site is effectively dead. Measuring inferential silence serves the same purpose as asking, “Is this site useful to me?” Banned subjects are a severe form of inferential silence. We’re rationalists. We ought to be able to discuss any subject in a reasonable manner. In other places, when someone doesn’t care about a thread, they just don’t bother reading it. Here, you’re told not to post it. Because it’s immoral to distract all these rationalists who are supposed to be advancing the Singularity with the temptation to debate (seemingly) irrelevant things. LessWrong places next to no value on self-restraint; better to restrain the world instead.
This is the part where things get difficult to navigate.
I predicted your reaction of considering the coherency of the collective as overblown. I’d already started modeling responses in my head when I got up from the computer after posting the comment. I don’t predict you’ll be terribly bothered by the prediction proving largely accurate; rather, I predict that, to you, it will seem only obvious that I should have been able to predict that. This will all seem fairly elementary to you. What I’m unsure about is the degree to which you are aware that you stand out from the rest of these folks. You’re exhibiting a deeper level of understanding of the usefulness of epistemic humility in bothering to speak to me and read my comments in the way that you are. You offer conversational pleasantries and pleasant offers for conversation, but that can be either a consciously recognized utility or an unconscious one, with varying degrees in between. I can already tell, though, what kind of path you’ve been on with that behavior. You’ll have seen and experienced things that most other LWers have not. Basically, what sets you apart is that you don’t suck at conversation.
It’s not that I predicted that you’d disagree or be unsure about what I was referring to; it’s more that understanding the idea myself, by virtue of being able to understand it, inherently lets me know that you will immediately agree if you can actually grasp the concept. It’s not that you’ll immediately see everything I have; that part will take time. What will happen is that you’ll have grasped the concept and the means to test the idea, though you’ll feel uncertain about it. You’ll of course be assessing the data you collect in the opposite of the manner that I do; while I search for all the clues indicating negatives, you’ll search for clues and reasoning that leave LessWrong with less blame, or maybe you’ll try to be more neutral about it (if you can determine where the middle ground lies). I wrote my last comment because I’d already concluded that you’d be able to fully grasp my concept; but be forewarned: understanding my lone hypothesis in light of no competing hypotheses could change your beliefs! (An irreparable change, clearly.)
I’ve more to say, but it won’t make sense to say it without receiving feedback about the more exact mechanics of your stage of grasping my concept. I predict you won’t notice anything out of the ordinary about the thoughts you’ll have thought in reading/responding to/pondering this. These predictions, again, will appear to be mundane.
That’s not obvious to me. I’d expect LWers to be the kind of high-NFC/TIE people who try to weigh evidence in a two-sided way before deciding to like a particular politician or organization in the first place, and who would probably, having made that decision, try to remain aware that opposing evidence exists.
I’m less optimistic. While nonsensical ad hoc arguments do provoke thoughts, those thoughts are sometimes things like, “Jesus, am I doomed to hear that shitty pseudo-argument every time I talk to people about this?” or “I already pre-empted that dud counterargument and they ignored me and went ahead and used it anyway!” or “Huh?!”, rather than “Oh, this other person seems to have misunderstanding [X]; I’d better say [Y] to try disabusing them of it”.
Unfortunately a lot of arguments don’t seem to be between “otherwise rational people”, in the sense you give here.
But I’ve seen (and occasionally participated in) arguments here about macroeconomics, feminism, HIV & AIDS, DDT, peak oil, the riskiness of the 80,000 Hours strategy of getting rich to donate to charity, how to assess the importance of technologies, global warming, how much lead exposure harms children’s development, astronomical waste, the global demographic transition, and more. While these are domain-specific issues, I wouldn’t call them trivial. And I’ve seen broader, nontrivial arguments about developing epistemic rationality, whether at the personal or social level. (What’s the right tradeoff between epistemic & instrumental rationality? When should one trust science? How does the social structure of science affect the reliability of the body of knowledge we call science? How does one decide on priors? What are good 5-second skills that help reinforce good rationalist habits? Where do the insights & intuitions of experts come from? How feasible is rationality training for people of ordinary IQ?)
That’s too vague for me to have a strong opinion about. (Presumably you don’t literally mean “nobody”, and I don’t know precisely which beliefs you’re referring to with “deep beliefs”.) But there are possible counterexamples. People have decided to dedicate years of their lives (and/or thousands of dollars) to attacking the problem of FAI because of their interactions with LW. I dimly recall seeing a lurker post here saying they cured their delusional mental illness by internalizing rationality lessons from the Sequences.
That’s a bit of an unfair & presumptuous way to put it. It’s not as if LW only realized human brains run on cognitive biases once it started having flamewars on taboo topics. The ubiquity of cognitive bias is the central dogma of LW if anything is; we already knew that the people we were dealing with were “idiots” in this respect. Besides, there’s a more parsimonious explanation for why some topics are taboo here: they lead to disproportionately unpleasant & unproductive arguments.
Finally I can agree with you on something! Yes, me too, and we’re by no means the only ones. (I recognize I’m part of the problem here, being basically a rationalist kibitzer. I would be glad to be more rational, but I’m too lazy to put in the actual effort to become more rational. LW is mostly an entertainment device for me, albeit one that occasionally stretches my brain a little, like a book of crosswords.)
Ideally, yes. Unfortunately, in reality, we’re still human, with the same bias-inducing hot buttons as everyone else. I think it’s legitimate to accommodate that by recognizing some topics reliably make people blow up, and cultivating LW-specific norms to avoid those topics (or at least damp the powder to minimize the risk of explosion). (I’d be worried if I thought LWers wanted to “restrain the world”, as you grandiosely put it, by extending that norm to everywhere beyond this community. But I don’t.)
Yeh, pretty much!
This is flattering and I’d like to believe it, but I suspect I’m just demonstrating my usual degree of getting the last word-ism, crossed with Someone Is Wrong On The Internet Syndrome. (Although this is far from the worst flare-up of those that I’ve had. Since then I’ve tried not to go on & on so much, but whether I’ve succeeded is, hrrrm, debatable.)
Right again. I still don’t have any idea what your concept/hypothesis is (although I expect it’ll be an anticlimax after all this build-up), but maybe what I’ve written here gives you some idea of how to pitch it.
[Comment length limitation continuance...]
It will, despite my fantasies, be anticlimactic, as you predict. While I predicted this already, I didn’t predict that you would consciously and vocally predict this yourself. My model updates thus: though I was not consciously aware that stating my predictions might be an invitation for you to state your own set of predictions, I am now aware that such a result is possible. What scenarios the practice is useful in, why it works, how it fails, when it does, and all such related questions are unknown. (This could be why my brain didn’t think to inform my consciousness of the possibility, now that I think about it in writing this.) A more useful tool is that I can now read that prediction as a strong potential even when it goes unstated; I can now read inferential silence slightly better. If not for general contexts, then at least for LessWrong to whatever degree. Most useful of all the data packed into that sentence is this: I now know, dividing out the apathy, carelessness, and desires for the last word and Internet Correction, what you’re contextually looking for in this conversation. Effectively, I’m measuring your factored interest in what I have to say. The next factor to divide out is the pretense/build-up.
Certainly so, insofar as you were willing to reply. Though you didn’t seem it and there was no evidence, the thought crossed my mind that I’d gone too far and you were just not going to bother responding. I didn’t think I exceeded your boundaries, but I’ve known LessWrongers to conceal their true standards in order to more fully detect “loonies” or “crackpots.”
There’s no sentence I can form (Understand style) that will stun you with sheer realization (rather than be more likely to convince you of lessened intelligence). This is primarily because building the framework for such realizations results in a level of understanding that makes the lone trigger assertion seem mundane by conceptual ambiance. That is, I predict that there is nothing I can say about my predictions of you that you will both recognize as accurate while also recognizing as extraordinary. I have one more primary prediction to make, but I’ll keep it to myself for the moment.
Yes, I expect whatever big conclusion you’re winding up to will prove either true & trivial, or surprising & false. (I am still a bit curious as to whether you’ll take the boring route or the crackpot route, although my curiosity is hardening into impatience.)
Do you have any actual reason (introspection doesn’t count) to “expect LWers to be the kind of high-NFC/TIE people who try to weigh evidence in a two-sided way before deciding”? I’m not asking if you can fathom or rationalize up a reason; I’m requesting the raw original basis for the assumption.
Your reduced optimism is a recognition within my assessment rather than without it; you agree, but you see deeper properties. Nonsensical arguments are not useful after a certain point, naturally, but where that point lies is a matter we can only determine after assessing each nonsensical idea in turn. We can detect patterns among the space of nonsensical hypotheses, but we’d be neglecting our duty as rationalists and Bayesians alike if we didn’t properly break down each hypothesis in turn to determine its proper weight and quality over the space of measured data. Solomonoff induction is what it is because it takes every possibility into account. Of course, if I start off a discussion saying nonsense is useful, you can well predict what the reaction will be. It’s useful as a starting point, from a state of ignorance. (The default state of all people, LessWrongers included.)
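(To make the Solomonoff point concrete, as a sketch of the standard formulation rather than anything the comment above spells out: each computable hypothesis $h$ gets a prior weight that shrinks with its description length $K(h)$, so even apparently nonsensical hypotheses keep a nonzero weight and are discounted rather than dismissed:
$$P(h) \;\propto\; 2^{-K(h)}, \qquad P(h \mid D) \;=\; \frac{P(D \mid h)\, 2^{-K(h)}}{\sum_{h'} P(D \mid h')\, 2^{-K(h')}},$$
where $D$ is the observed data and the sum ranges over all computable hypotheses. That sum over everything is the formal sense in which “every possibility is taken into account.”)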
Macroeconomics: Semi-legitimate topic. There is room for severe rational disagreement. The implications for most participants in such discussions are very low, which classifies the topic as irrelevant despite the room for opinion variance.
Feminism: Arguably a legitimate point to contend over. I’ll allow this as evidence counter to my stance if you can convince me that it was being legitimately argued: someone would need to legitimately hold the stance of a feminist and not budge in terms of, “Well, I take feminism to mean...” Basically, I don’t really believe this is a point of contention, rather than mere discussion, for the generalized LessWrong collective.
HIV & AIDS: Can’t perform assessment. Was anyone actually positing non-consensus ideas in the discussion?
DDT: What’s to discuss? “Should it have been done?” From my understanding this is an issue of the past and thus qualifies as trivial by virtue of being causally disconnected from future actions. Not saying discussing the past isn’t useful, but it’s not exactly boldly adventurous thinking on anyone’s part.
Peak oil: Very legitimate topic. Surprised to hear that it was discussed here. Tell me though, how did the LessWrong collective vote on the comments composing the discussion? Is there a clear split composed of downvotes for comments arguing the dangers of peak oil, and upvotes for the other side? If you wish to argue that wrongness ought to be downvoted, I can address that.
Getting rich to donate to charity: Trivial. Absolutely trivial. This is the kind of LessWrong circlejerking that upsets me the most. It’s never a discussion about how to actually get rich, or which charities to donate to, which problems to solve, who is best qualified to do it, or any such useful discussion. It is always, every time, about whether or not that is the optimal route. Since nobody is actually going to do anything useful as a result of such discussions, yes, literally, intellectual masturbation.
How to assess the importance of technologies: Semi-legitimate topic. What we need here are theories, new ideas, hypotheses, in spades. LessWrong hates that. New ideas, ideas that stand out, heck, anything short of previously established LessWrong consensus, gets downvoted. You could say LessWrong argues about how to assess the importance, but never actually does assess it.
Global warming: Fully legitimate topic.
“How much lead exposure harms children’s development:” It’s a bad thing. What’s to argue or discuss? (Not requesting this as a topic, but demonstrating why I don’t think LessWrong’s discussing it is useful in any way.)
Astronomical waste: Same as above.
Global demographic transition: Legitimate, useful even, but trivial in the sense that most of what you’re doing is just looking at the data coming out; I don’t see any real immediate epistemic growth coming out of this.
Yes, that is the thing which I do credit LessWrong on. The problem is in the rate of advancement; nobody is really getting solid returns on this investment. It’s useful, but not in excess of the average usefulness coming from any other field of study or social process.
I have a strong opinion on this that LessWrong has more or less instructed me to censor. Suffice to say I am personally content with leaving that funding and effort in place.
That is intensely interesting and the kind of thing I’d yell at you for not looking more into, let alone remembering only dimly. Events like these are where we’re beginning to detect returns on all this investment. I would immediately hold an interview in response to such a stimulus.
That is, word for word, thought for thought that wrote it, perception for perception that generated the thoughts, the exact basis of the understanding that leads me to make the arguments I am making now.
This is, primarily, why I do things other than oppose the subject bans. Leaving it banned, leaving it taboo, dampens the powder considerably. This is where I can help, if LessWrong could put up with the fact that I know how to navigate the transition. But of course that’s an extraordinary claim; I’m not allowed to make it. First I have to give evidence that I can do it. Do what? Improve LessWrong on a mass scale. Evidence of that? In what form? Should I bring about the Singularity? Should I improve some other (equally resistant) rationalist community? What evidence can I possibly give of my ability to do such a thing? (The last person I asked this question was unable to divine the answer.)
I’m left with having to argue that I’m on a level where I can manage a community of rationalists. It’s not an argument any LessWronger is going to like very much at all. You’re able to listen to it now because you’re not the average LessWronger. You’re different, and if you’ve properly taken the time to reflect on the opening question of this comment, you’ll know exactly why that is. I’m not telling you this to flatter you (though it is reason to be flattered), but rather because I need you to be slightly more self-aware in order for you to see the true face of LessWrong that’s hidden behind your assumption that the members of the mass are at all similar to yourself on an epistemic level. How exactly to utilize that is something I’ve yet to fully ascertain, but it is advanced by this conversation.
Interesting article, and I’m surprised/relieved/excited to see just how upvoted it’s been. I can say this much: Wanting the last word, wanting to Correct the Internet… These are useful things that advance rationality. Apathy is an even more powerful force than either of those. I know a few ways to use it usefully. You’re part of the solution, but you’re not seeing it yet, because you’re not seeing how far behind the mass really is.
LessWrong is a single point within a growing Singularity. I speak in grandiose terms because the implications of LessWrong’s existence, growth, and path are themselves grand. Politics is one of three memetically spread conversational taboos outside of LessWrong. LessWrong merely formalized this generational wisdom. As Facebook usage picks up, and the art of internet argument is brought to the masses, we’re seeing an increase in socioeconomic and sociopolitical debate. This is correct, and useful. However, nobody aside from myself and a few others I’ve met seems to be noticing this. LessWrong itself is going to become generationally memetic. This is correct, and useful. When, exactly, this will happen is a function primarily of society. What, exactly, LessWrong looks like at that moment in history will offset billions of fates. Little cracks and biases will form cavernous gaps in a civilization’s mindset. This moment in history is far off, so we’re safe for the time being. (If that moment were right now, I would be spending as much of my time as possible working on AGI to crush the resulting leviathan.)
Focusing on this one currently-LessWrong-specific meme, what do you see happening if LW’s memetic moment were right now? Would LessWrong then merely be restraining its own members?
[Comment length reached, continuing...]
LWers self-report having above-average IQs. (One can argue that those numbers are too high, as I’ve done, but those are just arguments about degree.) People with more cognitive firepower to direct at problems are presumably going to do so more often.
LWers self-report above-average AQs. (Again, one might argue those AQs are exaggerated, but the sign of the effect is surely right given LW’s nerdy bent.) This is evidence in favour of LWers being people who tend to automatically apply a fine-grained (if not outright pedantic) and systematic thinking style when confronted with a new person or organization to think about.
Two linked observations. One: a fallacy/heuristic that analytical people often lean on is treating reversed stupidity as intelligence. Two: the political stupidity that an analytical person is likely to find most salient is the stupidity coming from people with firmly held, off-centre political views. Bringing the two together: even before discovering LW, LWers are the kind of analytical types who’d apply the reversed stupidity heuristic to politics, and infer from it that the way to avoid political stupidity is to postpone judgement by trying to look at Both Sides before committing to a political position.
Every time Eliezer writes a new chapter of his HPMoR fanfic, LW’s Discussion section explodes in a frenzy of speculation and attempts to integrate disparate blobs of evidence into predictions about what’s going to happen next, with a zeal most uninterested outside observers might find hard to understand. In line with nerd stereotype, LWers can’t even read a Harry Potter story without itching to poke holes in it.
(Have to dash out of the house now but I’ll comment on the rest soon.)
I agree with that, read literally, but I disagree with the implied conclusion. Nonsensical arguments hit diminishing (and indeed negative) returns so quickly that in practice they’re nearly useless. (There are situations where this isn’t so, namely educational ones, where having a pupil or student express their muddled understanding makes it possible to correct them. But I don’t think you have that sort of didactic context in mind.)
Hmm. I tend not to wade into the arguments about feminism so I don’t remember any examples that unambiguously meet your criteria, and some quick Google searches don’t give me any either, although you might have more luck. Still, even without evidence on hand sufficient to convince a sceptic, I’m fairly sure feminism, and related issues like pick-up artistry and optimal ways to start romantic relationships, are contentious topics on LW. (In fact I think there’s something approaching a mild norm against gratuitously bringing up those topics because Less Wrong Doesn’t Do Them Well.)
Yep. The person I ended up arguing with was saying that HIV isn’t an STD, that seroconversion isn’t indicative of HIV infection, and that there’s not much reason to think microscopic pictures of HIV are actually of HIV. (They started by saying they had 70% confidence “that the mainstream theory of HIV/AIDS is solid”, but what they wrote as the thread unfolded made clear that their effective degree of confidence was really much less.)
Here’s the discussion I had in mind.
I quickly skimmed the conversation I was thinking of and didn’t see a clear split. But you can judge for yourself.
Here’s a post on deciding which charities to donate to. Here’s a student asking how they can get rich for effective altruism. Here’s a detailed walkthrough of how to maximize the cash you get when searching for a programming job. Here’s someone asking straightforwardly how they can make money. Here’s Julia Wise wondering which career would allow her to donate the most money.
This would appear to be false.
Whether it affects children’s development to such a degree that it can explain future variations in violent crime levels.
I had hoped that your going through my list of examples point by point would clarify how you were judging which topics were “legitimate” & nontrivial, but I’m still unsure. In some ways it seems like you’re judging topics based on whether they’re things LWers are actually doing something about, but LWers aren’t (as far as I know) doing anything more about global warming or peak oil than they are about astronomical waste or the (insufficient speed of the) global demographic transition. So what makes the former more legit than the latter?
The point I meant to make in bringing that up was not that you should cheer people on for dedicating time & money to FAI; it was that people doing so is an existence proof that some LWers are “changing their deep beliefs after ‘seeing the light’”. If someone goes, “gee, I used to think I should devote my life to philosophy/writing/computer programming/medicine/social work/law, but now I’ve read LW I just want to throw money at MIRI, or fly to California to help out with CFAR”, and then they actually follow through, one can hardly accuse them of not changing their deep beliefs!
Unless my memory’s playing tricks on me, Eliezer did ask that person to elaborate, but got no response.
It seems pretty sensible to me to demand evidence when someone on the fringes of an established community says they’re convinced they know exactly (1) how to singlehandedly overhaul that community, and (2) what to aim for in overhauling it.
I can’t divine the answer you have in mind, either.
I don’t think you’re making the argument you think you are. The argument I’m hearing is that LW isn’t reaching its full potential because LWers sit around jacking each other off rather than getting shit done. You haven’t actually mounted an argument for your own managerial superiority yet.
How about this: I need you to spell out what you mean with this “true face of LessWrong” stuff. (And ideally why you think I’m different & special. The only evidence you’ve cited so far is that I’ve bothered to argue with you!) I doubt I’m nearly as astute as you think I am, not least because I can’t discern what you’re saying when you start laying on the gnomic flattery.
My own hunch: LW will carry on being a reasonable but not spectacular success for MIRI. It’ll continue serving as a pipeline of potential donors to (and workers for) MIRI & CFAR, growing steadily but not astoundingly for another decade or so until it basically runs its course.
OK, yes, if the LW memeplex went viral and imprinted itself on the minds of an entire generation, then by definition it’d be silly for me to airily say, “oh, that’s just an LW-specific meme, nothing to worry about”. But I don’t worry about that risk much for two reasons: the outside view says LW most likely won’t be that successful; and people love to argue politics, and are likely to argue politics even if most of them end up believing in (and overinterpreting) “Politics is the Mindkiller”. Little political scuffles still break out here, don’t they?
I do, actually, which raises the question of why you think I didn’t have that in mind. Did you not realize that LessWrong, and pretty much our entire world civilization, is in such a didactic state? Moreover, if we weren’t in such a didactic state, why does LessWrong exist? Does the art of human rationality not have vast room to improve? This honestly seems like a highly contradictory stance, so I hope I’m not attacking a straw man.
So it would. Thank you for taking the time to track down those articles. As always, it’s given me a few new ideas about how to work with LessWrong.
I was using a rough estimate for legitimacy; I really just want LessWrong to be more of an active force in the world. There are topics and discussions that further this process and there are topics and discussions that simply do not. Similarly, there are topics and discussions where you can pretend you’re disagreeing, but not really honing your rationality in any way by participating. For reference, this conversation isn’t honing our rationality very well; we’re already pretty finely tuned. What’s happening between us now is current-optimum information exchange. I’m providing you with tangible structural components, and you’re providing me with excellent calibration data.
Oh but that is very much exactly what I can do!
In each and every one of those cases you will find that the person had not spent sufficient time reflecting on the usefulness of thought and refined reasoning, or else on uFAI and existential risks. These ideas did not exist in their minds in a “deep belief” state; rather, their minds were a relatively blank slate primed to receive the first idea that came along. uFAI is not a high-class danger; EY is wrong, and the funding and effort is, in large part, illegitimate. I am personally content leaving that fear, effort, and funding in place precisely because I can milk it for my own personal benefit. Does every such person who reads the Sequences run off to donate or start having nightmares about FAI punishing them for not donating? Absolutely, positively, this is not the case.
Deep beliefs are an entirely different class of psychological construct. Imagine I am very much of the belief that AI cannot be created because there’s something fundamental in the organic brain that a machine cannot replicate. What will reading every AI-relevant article in the Sequences get me? Will my deep (and irrational) beliefs be overridden and replaced with AI existential fear? It is very difficult for me to assume you’ll do anything but agree that such things do not happen, but I must leave open the possibility that you’ll see something that I missed. This is a relatively strong belief of mine, but unlike most others, I will never close myself off to new ideas. I am very much of the view that child-like plasticity can be maintained so long as I do not make the conscious decision to close myself off and pretend I know more than I actually do.
Ah. No harm, no foul, then.
I’ve been masking heavily. To be honest, my ideas were embedded many replies ago. I’m only responding now to see what you have to offer, what level you’re at, and what levels and subjects you’re overtly receptive to. (And, on the off-chance, to pick up an observer or two.)
“Self-aware” is a non-trivial aspect here. It’s not something I can communicate simply by asserting it, because you can only trust the assertion so much, especially given that the assertion is about you. Among other things, I’m measuring the rate at which you come to realizations. “If you’ve properly taken the time to reflect on the opening question of this comment” is more than enough of a clue. That you haven’t put the reflection in simply from my cluing gives me a very detailed picture of how much you currently trust my judgment. I actually thought it was pretty awesome that you responded to the opening question in an isolated reply and had to rush out right after answering it, giving you much more time to reflect on it than if you had serially read and replied without expending too much mental effort in doing so. I’m really not here to convince you of my societal/managerial competence by direct demonstration; this is just gathering critical calibration data on my part.
I’ve already spelled it out pretty damn concisely. Recognizing the differences between yourself and the people you like to think are very much like you is uniquely up to you.
Yeah, pretty much. LessWrong’s memetic moment in history isn’t necessarily a point in time at which it is active. That’s sort of the premise of the concern about LessWrong’s immediate memeplex going viral. As the population’s intelligence slowly increases, it’ll eventually hit a sweet spot where LessWrong’s content will resonate with it.
...But yeah, ban on politics isn’t one of the dangerous LessWrong memes.
I did not. And do not, in fact. Those didactic states are states where there’s someone who’s clearly the teacher (primarily interested in passing on knowledge), and someone who’s clearly the pupil (or pupils plural — but however many, the pupil(s) are well aware they’re not the teacher). But on LW and most other places where grown-ups discuss things, things don’t run so much on a teacher-student model; it’s mostly peers arguing with each other on a roughly even footing, and in a lot of those arguments, nobody’s thinking of themselves as the pupil. Even though people are still learning from each other in such situations, they’re not what I had in mind as “didactic”.
In hindsight I should’ve used the word “pedagogical” rather than “didactic”.
I think these questions are driven by misunderstandings of what I meant by “didactic context”. What I wrote above might clarify.
Thank you for updating in the face of evidence.
Fair enough.
I interpreted “deep beliefs” as referring to beliefs that matter enough to affect the believer’s behaviour. Under that interpretation, any new belief that leads to a major, consistent change in someone’s behaviour (e.g. changing jobs to donate thousands to MIRI) would seem to imply a change in deep beliefs. You evidently have a different meaning of “deep belief” in mind but I still don’t know what (even after reading that paragraph and the one after it).
Hrmm. Well, that wraps up that branch of the conversation quite tidily.
I suppose that’s true...
...but I’d still soften that “will” to a “might, someday, conceivably”. Things don’t go viral in so predictable a fashion. (And even when they do, they often go viral as short-term fads.)
Another reason I’m not too worried: the downsides of LW memes invading everyone’s head would be relatively small. People believe all sorts of screamingly irrational and generally worse things already.