Tangent: If there is a desire to recruit more people to LW, I’d like to see a concerted effort to target domain experts we don’t have. You start to get diminishing returns from adding more computer programmers with amateur interests in hard science.
I’m doubtful LW will ever become really good at recruiting.
I went to the page with the Recent Posts and crunched some numbers. Of 250 recent posts, 35 were on meetups, 25 were recurring threads like Quotes or Potter Commentary, 28 were things that require you to believe in our particular formulation of transhuman singularitarianism before the premise even makes sense, and 13 were meta about the site itself. That leaves about 150 that could realistically draw new people in. Of those, 55 had <15 karma; i.e., we couldn’t even find fifteen people on this site willing to say they were any good, so newbies are unlikely to like them much. That leaves 95. Of those, about 25 were about specialized math-y topics that most IQ 130+ rationalists don’t even realize exist and have no background in, like decision theory. So that leaves maybe 70 posts since May that are really in the running for attracting newbies. Many of these posts are, in my opinion, either so soft and vague as to be boring, or else filled with jargon.
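(For anyone who wants to check the arithmetic, here’s the funnel as a quick Python sketch. The variable names are mine; the tallies are just my rough eyeball counts from above, so the exact outputs differ slightly from the rounded figures in the paragraph.)

```python
# Funnel over the 250 most recent posts (all counts are rough eyeball tallies).
total = 250
meetups = 35
recurring = 25        # Quotes, Potter Commentary, and similar ongoing threads
singularitarian = 28  # premise requires buying the local transhumanist cluster
site_meta = 13

candidates = total - meetups - recurring - singularitarian - site_meta
print(candidates)     # 149, i.e. the "about 150" above

low_karma = 55        # posts with < 15 karma
after_karma = candidates - low_karma
print(after_karma)    # 94, rounded to 95 above

specialized = 25      # decision theory and other math-heavy topics
newbie_ready = after_karma - specialized
print(newbie_ready)   # 69, i.e. "maybe 70" posts since May
```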
Honestly, I’m surprised we’re getting new people at the rate we are.
I haven’t looked as closely at this as you, but from what I can tell the site looks much better if we only examine promoted posts. Meta and AI posts rarely make the front page. What does fill up the front page are the meetup announcements. I’m not sure what to do about that, since promoting that aspect of the site does seem important. It wouldn’t look so bad if we were getting more front-page-worthy rationality/science posts.
Does anyone other than me have a bunch of post ideas they’ve been meaning to write but keep procrastinating on? If there are a couple of us, I have an idea.
28 were things that require you to believe in our particular formulation of transhuman singularitarianism before the premise even makes sense

I’ll just say it: for reasons totally unrelated to recruitment I’d like to see considerably fewer of these. People are pretty good about voting down the FAI ones (which are nearly always horrendously sloppy). But my sense is that, in general, posts on transhumanist/singularity topics are noticeably less rigorous and less insightful than posts on other topics. Why this is so is a rather interesting question. My guess is the lack of a foil. With topics in science, philosophy, and rationality there are established authorities with empirical work to bolster our positions and professional studies for us to criticize and engage with. When I did high school debate, my best matches were with debaters much more talented and experienced than I was, because I would rise to their level. Conversely, I would sink to the level of my least talented opponents. The problem with transhumanist/singularity topics is that a) people here already agree with the most talented contributors in the related fields, and b) many of the other contributors in those fields are quite mediocre relative to domain experts in more mainstream fields.
I believe LW is at its best when it is engaging and criticizing mainstream institutions and popular ways of doing things, and trying to provide alternatives. This is particularly the case when the target is a representative of traditional rationality. Now, of course there is a Less Wrong critique of the mainstream regarding singularity and transhumanist topics; it’s just a rather simple and straightforward one, which has been covered dozens of times. It is simple because most people, institutions, and systems of thought have made zero effort to engage with transhumanism and Singularitarianism. There is no foil.
I realize, of course, that I have nowhere near the standing to dictate the way Less Wrong should be. But occasional months where AI was forbidden as a topic, as it was in the very beginning, would, I think, rejuvenate our discussions.
And yes, I realize I’m the guy who posted about Pet Cryonics in the discussion section.
Thanks for this analysis, Yvain. I’m glad you’re interested in this even if you are (rightfully) pessimistic.
I agree that the ongoing community dynamic is a terrible place to “jump into” LW. I’ve been wanting someone (any volunteers out there?) to help design a friendlier, more Wikipedia-ish homepage for LessWrong, one which could actually help non-hardcore LW users navigate the site in a way that makes sense. The promoted stream is a horrible resource for that. If someone could make a mock-up of a potential LW homepage (on the LW wiki, perhaps?), I could get it implemented for you if it’s even halfway respectable. Nothing could be worse than what we currently have. A good homepage would probably have a few stable pointers to the important sequences, a portion of the current promoted feed, a video or other tutorial explaining what LW is all about… I dunno. Just anything that isn’t an opaque listing of “Open threads!”, “Cryonics!”, “Torture scenarios!”, “Rationality Quotes!”, “Omega!”, “Meetups!”, “Cake!”
For what it’s worth, the thing that got me visiting here regularly was the list of EY’s OB posts.
I know y’all love the sequences as such, and I can understand why. The fact remains, though, that I was motivated to work my way through much of that material relatively systematically in a chronological format, but once I got to the end of that list—that is, the time of the OB-to-LW migration—I bogged down (1). The sequence/wiki style is less compelling to me than the chronological style, especially given the degree to which the posts themselves really are blog posts and not articles.
I suspect that having some sense of where the process terminates is a key aspect of that. If I don’t know how long the process is going to be, it’s harder to get up the energy to work through it.
Anyway, to the extent that appealing to people like me is a valuable subgoal(2), it seems what you should do is assemble a simple chronological list of LW posts from the dawn of the site that are most representative of what you’d like the site content to look like. (3)
==
(1) To be fair, that was also the start of the Fun Theory sequence, which I am finding less compelling than some of its predecessors, so the content may bear some of the responsibility for my bogged-down-ness… but not all of it, nor even (I think) most of it.
(2) Which isn’t intended as false modesty; I just mean there’s no particular reason to believe that what appeals to me will appeal to anyone else.
(3) It may be sufficient to take the highest-voted tier of posts for each month, say, and string them together chronologically… though you’d probably want to backfill other posts that they depend on.
This is one of the few places on the net where cryonics-related topics are discussed in a coherent manner, particularly by people in my age bracket. And I like that while it’s cryonics-friendly, it’s not about cryonics. It’s about the (supposed) rational benefit of cryonics. Not just any cryonics-related topic gets discussed, and not just in any way.
I can see how the community might grow to a point where the different topics cannot all be discussed in the same place. The current division between Discussion and Main is already helpful for distinguishing formal from informal discussions. Perhaps sub-groups for cryonics, decision theory, existential risk, etc. are good ideas. But separating them out runs the risk of creating trivial inconveniences that impede the cross-pollination of ideas.
Yes. Who is promoting these things, and why? What is their rationale?
An updated promotion policy that keeps meetup threads, discussion threads and obscure singularity-heavy (and other nominally off-topic) threads from being promoted could improve the situation.
A quick glance at the front page shows that 9 of the articles currently sitting on it are for meetups, 7 of which are already over. I really think meetups, at the very least, should be taken off the front page. I’d also favour at least half the horizontal width being used for pointers to places to dive into the sequences, and potentially even a featured article of the week/month.
28 were things that require you to believe in our particular formulation of transhuman singularitarianism before the premise even makes sense

This. This is the thing that has to be fixed before LessWrong can claim to primarily be about “refining the art of human rationality.”
AI is interesting, but not about rationality. Cryonics is interesting, but not about rationality. Nanotechnology is interesting, but not about rationality.
Ways of thinking are about rationality. Ways of being smart are about rationality. Ways of being stupid are about rationality. Stories of failing spectacularly are about rationality.
The first group may of course be highly relevant to the second. But it’s not about it, and requiring readers of a site advertised as being on the topic of “rationality” to buy into the standard transhumanist belief cluster is a failure of signaling rationality, and thus for these purposes a failure of instrumental rationality.
Before posting, don’t just think “Is this interesting to the LW readership?” but also “and is it on topic? Is my actual point about rationality?”
Honestly, I’m surprised we’re getting new people at the rate we are.

It’s an interesting site full of really smart people, and (this is a real plus point) the comment quality is consistently high because people buy into the moderation system, i.e. “mod up if you want to see more comments like this.” That’s a BIG WIN. Keeps me reading.
Really. I strongly suggest you just write a main section post requesting people stay on-topic in posts, and that not clearly being on-topic about rationality is a reason to downvote. See what the community thinks.
I felt a bit out of place until I started reading MoR; what was all this cryonics/decision theory stuff?
A couple chapters in I thought, “THAT’S the kind of stuff I’m interested in talking about! Now I feel like I’m in the right place.”
I agree that most of the newer content we have is generally less interesting than the older, but maybe that’s because the older already covers a huge amount of LW’s subject base pretty well. The good thing is, the old content is all still here and good for attracting new readers, if new readers are what we care about. That said, it could be better promoted and made easier to find, navigate, and search.
Of course, speaking selfishly I’d really like it if we had new content that was as solid and useful as Eliezer’s work, and some of the other ‘classic’ content. Perhaps we need more LW people from different backgrounds than we currently cover?
I think the logical next step is to turn the sequences into a book, or several books, of some kind. I vaguely remember EY talking about writing a book on rationality, but I could be mistaken. This might make LW very good at recruiting new members, compared to now, if the book is successful and well read for years to come.
Everyone who’s actually read the sequences—and that’s a lot of reading, too much for almost any casual reader—should try summarising and condensing them themselves for their own circle of net-friends. That’s how to get a meme out there. You’ll be a hell of a lot more convincing to people you know if you’re saying something than if you point to someone else saying something.
Eliezer is writing the Sequences into a rationality book, but I agree with David Gerard’s suggestion that LW readers should try summarizing his ideas in their own words. This could be an important topic for discussion here on LW: making sure that some of the core ideas discussed on this site aren’t lost in translation.
Yes, that’s being done.
Honestly, whenever I read through Omega-related posts, I feel like we might be trying to re-invent Calvinist predestination theology. These sorts of paradoxes of free will have been hashed out in monasteries and seminaries for almost two millennia, by intelligent people who were as rational as their understanding of the universe allowed. I wonder if even a theology student has something to contribute here.
Hey, you’re relatively new. How did you end up here?
Someone linked to the paperclip maximizer wiki page in a post on reddit.
It’s an interesting idea, but we might need more emphasis on becoming more rational to improve your life (rather than on FAI) to attract them.
This relates to something else I was thinking about—at this point, LW is a pretty small social group. If there were 100,000 active members, which I think is theoretically possible, then LW would develop substructures of some sort.
Less Wrong doesn’t seem to be on Facebook. Should it be?
Less Wrong on Facebook
Agreed. Or perhaps rationality as a way of improving science?
That’s a terrifyingly large number to think about. Do you think 100,000 active members is possible while maintaining the current median IQ? Or without the signal-to-noise ratio dropping significantly?
The structure of the site would have to change significantly. We’d have to live with more specialization; no one would be able to follow all of that content.
Let’s run some casual analysis on intelligence level and LW.
My impression is that my intelligence level is between 1 in 1000 and 1 in 10,000. It isn’t hard for me to find compatible people in science fiction fandom. I think I’d resist arguments that it’s below 1 in 500 unless there was very good evidence.
I fit in well here, but am not of the top rank, partly due to lack of math and partly due to lack of ambition. It’s possible that even if I had more desire to write top level posts, I still wouldn’t be as good at it as Alicorn.
I don’t bother trying to follow the strategic maneuvering in MoR (I suspect I’m not alone in this), and appreciated the bit recently where Harry lost track. Sorry—don’t remember the details.
I haven’t found a lot of blogs where I’d say that the intellectual level is as high as LW, but I can recommend Making Light. The posters are very high on the verbal skills side and have respect for rational argument.
So, let’s start with 500 million people as a minimal pool of potential LW users. One in a thousand of them would be 500,000.
1 in 5 of them showing up seems like a high proportion, but not insanely so.
Maybe LW could top out at 50,000 instead of 100,000 without changing its character.
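(To make that back-of-the-envelope arithmetic explicit, here’s a minimal sketch; every input is a guess, and the variable names are mine.)

```python
# Fermi estimate of LW's potential ceiling (every number here is a guess).
pool = 500_000_000                  # minimally potential LW users
at_the_right_level = pool // 1000   # one in a thousand -> 500,000
show_up_rate = 1 / 5                # "high, but not insanely so"
ceiling = int(at_the_right_level * show_up_rate)
print(ceiling)                      # 100,000; halve it for the more cautious 50,000
```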
Demographics point towards increasing numbers of potential LW posters, if only because the proportion of people who use the web recreationally is going up. I’m not sure whether the Flynn effect is relevant.
There are certainly 100,000 people as smart as the current community who might one day read Less Wrong. What seems unlikely is that we can expand without lowering barriers to entry. Lowering these barriers, it seems to me, would make it very difficult to restrict our recruitment to those who are in that 100,000.
It’s complicated—I’d say that current members continuing to promote it in their social circles will tend to maintain quality. And MoR is helpful, too.
Just expanding for its own sake wouldn’t be a good idea.
Figuring out a more efficient way of teaching rationality than the sequences (I hope Eliezer’s book will be that) would be a very good thing.
I don’t think the Flynn effect is relevant for this analysis.
The speed of development (in places like India today, or Eastern Europe in the early ’00s), the spread of English as a second language, and changing cultural standards are, I think, much more important reasons for the rise in the number of English-speaking people who use the web recreationally. Improvements in environment don’t affect adult IQ much; as nations develop and English spreads, it stands to reason that this previously hidden reservoir of potential users represents a greater increase in absolute numbers than the Flynn effect could hope to deliver via children growing up with slightly higher IQs, especially since many developing nations may be developing precisely because they are using up their demographic dividend.
The Flynn effect also clearly can’t continue forever; there are signs that it’s ending, or has perhaps already ended, in the developed world (I think Denmark has shown a stall in the Flynn effect for more than half a decade; there may be other examples, but I don’t recall the studies right now).