Less Wrong lacks direction
I think the greatest issue with Less Wrong is that it lacks direction. There doesn’t appear to be anyone driving it forward or helping the community achieve its goals. At the start this role was taken by Eliezer, but he barely seems active these days. The expectation seems to be that things will happen spontaneously, on their own. And that has worked for a few things (e.g. the subreddit, the study hall), but on the whole the community is much less effective than it could be.
I want to give an example of how things could work. Let’s imagine Less Wrong had some kind of executive (as opposed to moderators who just keep everything in order). At the start of the year, they could create a thread asking what goals people thought were important for Less Wrong: e.g. increasing the content in Main, producing more content for a general audience, or increasing the female participation rate.
They would then have a Skype meeting to discuss the feedback and to debate which goals they wanted to focus on primarily. Suppose, for example, they decided they wanted to increase the content in Main. They might solicit community feedback on what kinds of articles people would like to see more of. They might contact people who wrote discussion posts that were of Main quality and suggest they submit some content there instead. They could come up with ideas for new kinds of content LW might find useful (e.g. project management) and seed the site with content in that area, so that people understand that kind of content is desired.
These roles would take significant work, but I imagine people would be motivated to do this by altruism or status. By discussing ideas in person (instead of just over the internet), there would be more of an opportunity to build a consensus, and they would be able to make more progress towards addressing these issues.
If a group said that they thought A was an important issue and the solution was X, most members would pay more attention than if a random individual said it. No one would have to listen to anything they say, but I imagine that many would choose to. Furthermore, if the exec were all actively involved in the projects, I imagine they’d be able to complete some smaller ones themselves, or at least provide the initial push to get them going.
I think the issue you are seeing is that Less Wrong is fundamentally an online community/forum, not a movement or even a self-help group. “Having direction” is not a typical feature of such a medium, nor would I say that it would necessarily be a positive feature.
Think about it this way. The majority of the few (N < 10) times I’ve seen explicit criticism of Less Wrong, one of the main points cited was that Less Wrong had a direction, and that said direction was annoying. This usually referred to Less Wrong focusing on the FAI question and X-risk, though I believe I’ve seen the EA component of Less Wrong challenged as well. By its nature, having direction is exclusionary—people who disagree with you stop feeling welcome in the community.
With that said, I strongly caution against trying to change Less Wrong to impart direction to the community as a whole (e.g. by having an official “CEO”). By contrast, organizing a sub-movement within Less Wrong for that sort of thing carries much less risk of alienating people. I think that would be the healthiest direction to take it; plus, it allows you to grow organically (since people can easily join or leave your movement, and you don’t need to get the entire community mobilized to get started).
I think these concerns are valid if we expect the director(s) (or the process of determining LessWrong’s agenda) to not be especially good. If we do expect the director(s) to be good, then they should be able to take your concerns into account (include plenty of community feedback, deliberately err on the side of making goals inclusive, etc.) and still produce better results, I think.
If you (as an individual or as a community) don’t have coherent goals, then exclusionary behavior will still emerge by accident; and it’s harder to learn from emergent mistakes (‘each individual in our group did things that would be good in some contexts, or good from their perspective, but the aggregate behavior ended up having bad effects in some vague fashion’) than from more ‘agenty’ mistakes (‘we tried to work together to achieve an explicitly specified goal, and the goal didn’t end up achieved’).
If you do have written-out goals, then you can more easily discuss whether those goals are the right ones—you can even make one of your goals ‘spend a lot of time questioning these goals, and experiment with pursuing alternative goals’—and you can, if you want, deliberately optimize for inclusiveness (or for some deeper problem closer to people’s True Rejections). That creates some accountability when you aren’t sufficiently inclusive, makes it easier to operationalize exactly what we mean by ‘let’s be more inclusive’, and makes it clearer to outside observers that at least we want to be doing the right thing.
(This is all just an example of why I think having explicit common goals at all is a good idea; I don’t know how much we do want to become more inclusive on various axes.)
You make a good point, and I am very tempted to agree with you. You are certainly correct that even a completely non-centralized community with no stated goals can be exclusionary. And I can see “community goals” serving a positive role, guiding collective behavior towards communal improvement, whether that comes in the form of inclusiveness or other values.
With that said, I find myself strangely disquieted by the idea of Less Wrong being actively directed, especially by a singular individual. I’m not sure what my intuition is stuck on, but I do feel that it might be important. My best interpretation right now is that having an actively directed community may lend itself to catastrophic failure (in the same way that having a dictatorship lends itself to catastrophic failure).
If there is a single person or group of people directing the community, I can imagine them making decisions which anger the rest of the community, making people take sides or split from the group. I’ve seen that happen in forums where the moderators did something controversial, leading to considerable (albeit usually localized) disruption. If the community is directed democratically, I again see people being partisan and taking sides, leading to (potentially vicious) internal politics; and politics is both a mind killer and a major driver of divisiveness (which is typically bad for the community).
Now, to be entirely fair, these are somewhat “worst case” scenarios, and I don’t know how likely they are. However, I am having trouble thinking of any successful online communities which have taken this route. That may just be a failure of imagination, or it could be that something like this hasn’t been tried yet, but it is somewhat alarming. That is largely why I urge caution in this instance.
“With that said, I find myself strangely disquieted by the idea of Less Wrong being actively directed, especially by a singular individual.”—the proposal wasn’t that a single individual would choose the direction, but that there would be a group.
Do you think the words ”...Less Wrong being actively directed, especially by a committee” would cause less apprehension? X-)
Maybe. It’s hard to say.
Agreed that LW is in a kind of stagnation. However, I think that just someone writing a series of high-quality posts would suffice to fix it. Right now, the amount of discussion in comments is quite good; the problem is that there aren’t many interesting posts.
It isn’t entirely a good thing; many people have noticed that LW is somewhat of an echo chamber for Eliezer. Actually, we should endorse high-quality opinions that differ from the LW mainstream.
What are your heuristics for telling whether posts/comments contain “high-quality opinions,” or “LW mainstream”? Also, what did you think of Loosemore’s recent post on fallacies in AI predictions?
It’s just my impression; I don’t claim that it is precise.
As for the recent post by Loosemore, I think that it is sane and well-written, and clearly required a substantial amount of analysis and thinking to write. I consider it a central example of high-quality non-LW-mainstream posts.
Having said that, I mostly disagree with its conclusions. All the reasoning there is based on the assumption that the AGI will be logic-based (CLAI, following the post’s terminology), which I find unlikely. I’m 95% certain that if AGI is going to be built anytime soon, it will be based on machine learning; in any case, the claim that CLAI is “the only meaningful class of AI worth discussing” is far from true.
I think LW might actually be suffering from something like a collective affective death spiral.
I’m not sure how that relates to the proposed stagnation (i.e. loss of momentum) of the LW community. Could you please elaborate? I understand affective death spirals to mean something completely different, so I am totally confused.
It’s quite easy (and in fact almost inevitable) to get carried away with a theory (as in a bunch of axiomatic ideas together with a logical framework) you have. “As the theory seems truer, you will be more likely to question evidence that conflicts with it. As the favored theory seems more general, you will seek to use it in more explanations.” Thus you will cease to question the theory and cease to truly go beyond it, leading to stagnation.
What is the theory that you think LW has such a spiral around?
The idea that you can actually optimize your thought processes using deliberate rational will and analysis of biases, as exemplified by the home page, and specifically the extreme version of this idea that some users try to adopt.
Can you unpack “optimizing thought processes”? Under some definitions the statement is questionable, under others trivially true.
Also, the articles you’ve linked to describe techniques that are very popular outside LW—so if they are overrated, it isn’t an LW-specific mistake.
I can try to elaborate on the criticisms of the pages I linked. There hasn’t been any study of the long-term effects of spaced repetition. There are indications that it may be counter-productive and that it may act as an artificial “importance inflator” for information, desensitizing the brain’s long-term response to new knowledge that is actually important, especially if one is not consciously aware of that.
About the pomodoro technique, it’s even less researched than spaced repetition and there’s very little solid evidence that it works. One thing that seems a bit worrying is that it seems like a ‘desperate measure’ adopted by people experiencing low productivity, indicating some other problem (depression/burnout etc.) that should be dealt with directly. In these cases pomodoros would make things far worse.
It could be said that none of these are criticisms of LW, but just criticisms of these specific techniques, which arose outside of LW. However, if one is too eager to adopt and believe in such techniques, it betrays ADS-type thinking around the idea that optimization of thought processes can be done through “productivity hacks”.
How are you distinguishing an affective death spiral from people thinking that something is a good idea?
People using Anki and Pomodoros (neither of which were invented on LW or by LWers) doesn’t look extreme to me.
TDT, FAI (esp. CEV), acausal trading, MWI—regardless of whether they are true or not, the level of criticism is lower than one would expect, either because of the halo effect or ADS.
I see these things being discussed here from time to time. I don’t see any general booming of them, still less any increasing trend. Eliezer, of course, has boomed MWI quite strongly; but he is no longer here.
My impression is that inside LW they are usually assumed true, while outside LW they are usually assumed false or highly questionable. Again, I’m not saying that these theories are wrong, but the pattern looks suspicious: almost every one of LW’s non-mainstream beliefs can be traced back to Eliezer. What a coincidence. One possible explanation is the halo effect of the Sequences. Or they are actually underrated outside LW. Or my impressions are distorted.
I’m going with distorted.
Take MWI for example; apparently a lot of people are under the impression that LWers must be ~100% MWI fanatics. But the annual surveys report that lukewarm endorsements of MWI as the least bad QM interpretation covers, what, <50% of respondents? And it’s not clear to me that LW is even different from mainstream physicists, since the occasional polls of them show MWI keeps becoming more popular. It seems like people overgeneralize from the generally respectful treatment of MWI as a valid alternative (as opposed to early criticism of it as nonsense or crackpot pseudoscience) and from MWI topics being a lot more fun to discuss than, say, Copenhagen.
Or, global pandemics are regularly rated in the survey as a very concerning x-risk up there with AI, but are discussed much less; possibly because the risk of pandemics seems well-appreciated by society at large and there’s little new to discuss.
Similarly for some of the other stereotypical beliefs; critics like Stross and XiXiDu have been campaigning to turn Roko’s basilisk into the defining shibboleth of LW, but do even <5% of LWers take it seriously or as more than an obscure hypothetical in one superseded decision theory? (I don’t think so but in that case I can’t prove it with survey data.)
And with TDT and acausal trading, they’re technical and difficult enough, relying heavily on formal logic and decision theory, that it’s hard to make any comments on them at all, either pro or con. Personally, I don’t believe in acausal trading. But I also don’t ever come out and talk about it, because I don’t feel I understand it or UDT/TDT well, am not particularly interested in them, and have nothing new to contribute to conversations about them; so why would I write about them, and if I were writing about them, why would you or anyone want to read what I wrote?
I’m not really sure the issue is about “direction”; it’s more about having people with enough time and ideas to write awesome (or at least interesting) posts like the Sequences (the initial ones by Eliezer or the additional ones by various contributors).
What I would like to see are sequences of posts that build on each other, starting from the basics and going on to deep things (a bit like the Sequences). It could be a collective work (and would then need a “direction”), but it could also be the work of a single person.
As for myself, I did write a few posts (a few in Main and a few in Discussion), but the reason I haven’t written recently is mostly these three issues:
1. Lack of time, like many of us, I guess.
2. The feeling of not being “good enough”; that’s the problem with a community of “smart” people like LW, with high-quality base content (the Sequences): it’s a bit intimidating.
3. The “taboo” subjects (like politics), which I do understand and respect, but which limit what I can write about.
There are a few things I would like to write about, but either I feel I lack the skill/knowledge to do it at LW level (point 2), or they border too much on the “taboo” subjects (point 3).
If someone’s goal is status, why write on LW instead of writing a personal blog?
Writing on Less Wrong makes it easier to reach a large audience. New personal blogs would have trouble getting readers unless they were advertised in strategic places.
Perhaps they want status among the rationalist community?
If I read a blog post on a personal blog, it’s more likely that a month later I’ll remember who wrote it than if I read it on LW.
Yeah, but it’s not fair to start with “given that I read a post and it was on a personal blog...” if the odds of you reading said post in the first place are higher when it’s posted on LW rather than on someone’s personal blog that you may not be aware of or check regularly.
I’m not sure “direction” is the right phrase. It’s not like Eliezer told other people what posts to write—he had a goal and a thing to explain, and then he explained it. People didn’t read his posts because he had authority; they read his posts because someone recommended them as being useful and interesting.
And similarly, I don’t think the absence of roles is the issue: I think it’s the absence of time and topics. Maybe there are people with something to write who aren’t writing it because of some block that a clearly visible person could help remove—but it seems more likely to me that there are people with something to say but no time to say it, or with the time to write posts but little to talk about, and there are fewer intersections of those two as time goes on.
Or they simply write the post on their own blogs.
This assumes that the period when Eliezer was writing a lot was pretty optimal, and we should be trying to recapitulate that era of success. Maybe the thing Eliezer was trying to do has now succeeded, and we should be using the site in a very different way now? Or, if he didn’t succeed, there may still be better goals to move toward than ‘lots of people are active and have interesting discussions about miscellaneous topics a la early-LW’.
The OP states that, and I think I agree, in the sense that if there were a topic as general and as interesting as the Sequences, I would rather someone write a long text about it here than not do so. I suspect, as I’ve argued elsewhere, that there isn’t anything in that class; there are lots of interesting things for people to do now, but they aren’t going to be as common an interest.
I do think that there are changes LW could make to adapt to the new community and purpose for it, but I’m not sure here is the best place to discuss that.
I don’t think the site as a whole needs a “new” direction. It needs continued conversation, new sub-projects, and for the members to engage with the community.
Less Wrong has developed its own conventions for argument, reference points for logic, and traditions of interpretation for certain philosophical, computational, and everyday problems. The arguments all occur within a framework which implicitly furnishes the members with a certain standard of thinking and living (which we don’t always live up to).
Maybe what you really want is for people in the community to find a place where they can excel and contribute more. What we need most is to continue to develop ways people can contribute. Not force the generation of projects from above.
this was an unhelpful comment, removed and replaced by the comment you are now reading
ha!