LW2.0: Community, Culture, and Intellectual Progress
This post presents one lens I (Ruby) use to think about LessWrong 2.0 and what we’re trying to accomplish. While it does not capture everything important, it captures much, and explains a little of how our disparate-seeming projects can combine into a single coherent vision.
I describe a complementary lens in LessWrong 2.0: Technology Platform for Intellectual Progress.
(While the stated purpose of LessWrong is to be a place to learn and apply rationality, as with any minimally specified goal, we could go about pursuing it in multiple ways. In practice, other members of the LessWrong team and I care about intellectual progress, truth, existential risk, and the far future, and these broader goals drive our vision and choices for LessWrong.)
A Goal for LessWrong
Here is one goal that I think it might make sense for LessWrong to have:
A goal of LessWrong is to grow and sustain a community of aligned* members who are well-trained and well-equipped with the right tools and community infrastructure to make progress on the biggest problems facing humanity, with a special focus on the intellectual problems.
*sharing our broad values of improving the world, ensuring the long-term future is good, etc.
Things not well-expressed in this goal:
LessWrong’s core focus on rationality and believing true things.
What LessWrong aims to provide users.
(These are better expressed in the About/Welcome page.)
I want to point out that while the above might be a goal of the LessWrong team, that doesn’t mean it has to be a goal of our users. I wholeheartedly welcome users who come to LessWrong for their own purposes such as improving their personal rationality, learning interesting things, getting feedback on their ideas, being entertained by stimulating ideas, or participating socially in a community they like.
My RTC-P Framework
The reason I like the goal expressed above is that it provides unity to a wide range of activities we devote resources to. I like to group those activities into four overarching categories.
Recruitment: attracting new aligned and capable members to our community (and creating a funnel into EA orgs and projects)
Training: providing means for both new and existing members to improve their skills, knowledge, and effectiveness.
Community/Culture: we aim to improve community health and flourishing via means such as encouraging good epistemic norms and creating affordances to interact, e.g. meetups and conferences.
[Intellectual] Progress: we work to provide the LW platform and other tools that assist community members in contributing to progress on the challenging intellectual problems we face.
Recruitment
LessWrong has a non-trivial presence on the Internet. In the last 12 months, LessWrong has seen on average over 100k visitors each month [1]. Admittedly, many of these arrive via low-relevance Google searches; however, over 25k each month navigate directly to the lesswrong.com domain. Depending on the month, several hundred to several thousand arrive from each of SlateStarCodex and Hacker News. There are several hundred to a thousand unique pageviews of the opening posts of Rationality: A-Z each month, and around 500 more of the first chapter of HPMOR.
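(For the curious: a minimal sketch of how such referral figures could be tallied, assuming a hypothetical CSV export of pageview events with timestamp, visitor_id, and referrer columns. This is an illustration only, not our actual analytics pipeline.)

```python
# Minimal sketch: tally monthly visitors and referral sources from a
# hypothetical pageview export (columns: timestamp, visitor_id, referrer).
# Illustrative only; not the actual analytics pipeline.
import pandas as pd

events = pd.read_csv("pageviews.csv", parse_dates=["timestamp"])

# Unique visitors per calendar month.
monthly_visitors = (
    events.groupby(events["timestamp"].dt.to_period("M"))["visitor_id"].nunique()
)

def referral_source(referrer: object) -> str:
    """Classify a raw referrer string into the buckets discussed above."""
    r = str(referrer).lower()
    if r in ("", "nan"):
        return "direct"  # e.g. typed lesswrong.com into the address bar
    if "google" in r:
        return "search"
    if "slatestarcodex" in r:
        return "SlateStarCodex"
    if "news.ycombinator" in r:
        return "Hacker News"
    return "other"

events["source"] = events["referrer"].map(referral_source)
monthly_by_source = (
    events.groupby([events["timestamp"].dt.to_period("M"), "source"])["visitor_id"]
    .nunique()
    .unstack(fill_value=0)
)
print(monthly_visitors.tail())
print(monthly_by_source.tail())
```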
This means that a relatively large number of people are probably being exposed to the ideas of the LessWrong, Rationality, and Effective Altruism communities for the first time when they encounter LessWrong. LessWrong has the opportunity to spread its ideas, but more importantly, there is scope here for us to onboard capable and aligned new people into our community.
The team has recently been building things to help new visitors have an experience conducive to becoming a member of our community. The homepage has been redesigned to present recommendations of our core readings and other top posts to new users. We’ve also written a new welcome post and FAQ covering how the site works, what it’s about, and how to get up to speed.
Something else we might do is start posting content outside of LessWrong to make people aware of what is on the site. We could create a “newsletter” collection of content (a mix of the best recent and classic posts) and share it via a Facebook page, relevant places on Reddit, Twitter, etc. This might also help us draw back some past users who dropped off during the great decline of 2015-2016.
Of course, recruitment doesn’t consist solely of your first moments of exposure online. There is a “funnel” as you progress through it: getting up to speed on the community’s knowledge and culture, your first experiences engaging with people on the site (your first comments and posts), attending in-person meetups (these were significant for me), and so on. These are all steps by which someone becomes part of our band trying to do good things.
Indeed, if we want to be a community of people trying to do good, important things, then it’s important that we have an apparatus for bringing new people in. Recruitment. (I believe that if you are not growing, at least somewhat, then you are shrinking.) It’s not clear that we need to grow a lot; even 2x-5x might be sufficient. Certainly we should not grow at the expense of our culture and values.
Fortunately, LessWrong is a nonprofit, which helps with the incentives.
Training
LessWrong was not built to be solely a site for entertainment, recreation, or passive reading. The goal was always to improve: to think better, know more, and accomplish more. Tsuyoku Naritai is an emblematic post. The concept of a rationality dojo caught on. The goal is to do better individually and collectively.
LessWrong’s archive of rationality posts constitutes considerable training material. LessWrong has over 23k posts with non-negative karma scores. Noteworthy authors include Eliezer_Yudkowsky (1021 posts), Scott Alexander (230), Lukeprog (416), Kaj_Sotala (221), AnnaSalamon (68), So8res (51), Academian (37), and many others.
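(As a minimal sketch, per-author tallies like those above could be computed from a post export; the file name and field names here are assumptions, not the real schema.)

```python
# Minimal sketch: per-author counts of posts with non-negative karma from a
# hypothetical JSON export. Field names are assumptions, not the real schema.
import json
from collections import Counter

with open("posts.json") as f:
    posts = json.load(f)  # a list of post records

counts = Counter(p["author"] for p in posts if p.get("karma", 0) >= 0)

for author, n in counts.most_common(10):
    print(f"{author}: {n} posts")
```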
Effort has been invested to create easily accessible sequences of posts with convenience features such as Previous/Next buttons and Continue Reading suggestions on the homepage. The recently launched recommendation system suggests posts that users might be interested in and benefit from (a minimal sketch of one possible approach appears a bit further below). For example, here are some classics you might have missed:
Humans are not automatically strategic
Attempted Telekinesis
Ugh fields
Ask and Guess
The Neglected Virtue of Scholarship
Checklist of Rationality Habits
You can also see the list of the ten most upvoted LessWrong posts for each year 2010-2018. We hope to have users not just reading recent content, but also our most valuable content of all time.
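As mentioned above, here is a minimal sketch of one possible recommendation approach: sampling unread posts with probability weighted by karma, so that high-karma classics surface often without the list being fully deterministic. This is my own illustration under assumed field names, not the production algorithm.

```python
import random

def recommend(posts, read_ids, k=5, exponent=2.0):
    """Sample k unread posts, weighting by karma ** exponent.

    posts: list of dicts with "id" and "karma" keys (assumed fields);
    read_ids: set of post ids the user has already read.
    """
    unread = [p for p in posts if p["id"] not in read_ids and p["karma"] > 0]
    weights = [p["karma"] ** exponent for p in unread]
    picks = []
    # Sample without replacement so no post is recommended twice.
    for _ in range(min(k, len(unread))):
        i = random.choices(range(len(unread)), weights=weights, k=1)[0]
        picks.append(unread.pop(i))
        weights.pop(i)
    return picks
```

In a scheme like this, a higher exponent skews recommendations more heavily towards the classics, while an exponent of zero would recommend uniformly at random.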
Another idea, costly to implement but one the team is fond of, is building LessWrong into a “textbook” with exercises that increase comprehension and retention.
And, although it doesn’t happen on the site, I’d count all the rationality practice that LessWrong members perform together at their in-person meetups as part of the training caused by LessWrong. There are 101 posts on LessWrong with “dojo” in the title, marking them as meetups where people intended to get together and practice. Sampling from those posts, these are meetups where people worked on calibration, urge propagation, non-violent communication, growth mindset, statistics, Bayesian reasoning, Intuitive Bayes, Ideological Turing Tests, difficult conversations, Hamming prompts, stress, and memory.
Community & Culture
As an online forum, LessWrong naturally forms a community whose members share a culture and exchange ideas. Beyond the online, LessWrong has caused many in-person, offline communities to exist. In the past year, there were LessWrong meetups in thirty-one countries. There have been ~3,576 meetups posted on LessWrong (238 of these were SSC or combined LW/SSC meetups). Notably, the Bay Area rationalist community exists in large part due to LessWrong, even if it is now somewhat separate. Other communities which have historically gained notice are the New York, Seattle, and Melbourne groups. Large-area “mega-meetups” have been held on the US East Coast and in Canada, Australia, and Europe. There is a thriving rationalist/LessWrong community in Moscow.
Even with in-person communities already existing, I still see plenty of room for LessWrong to continue to bolster both online and offline community. For the offline world, we could provide support materials as in the past, provide funding (as CEA does for local EA groups), further develop our meetup coordination infrastructure, or host LessWrong conferences.
Culturally, LessWrong is defined by its epistemic norms, focus on truth, and openness to unconventional ideas. The community shares a distinctive body of knowledge and set of tools for thinking clearly. The core culture was established by Eliezer’s Sequences, which shaped the approach to belief, reason, explanation, truth and evidence, the use of language, and the practice of changing one’s mind. As far as I know, the commitment to clear thinking and good communication present on LessWrong is unparalleled by that of any other public place on the Internet.
A goal of LessWrong is keeping this culture strong and being wary of any changes which could dilute this most valuable aspect of our community, e.g. promoting growth without ensuring new members are properly enculturated.
At present, standards are in part being kept high by active and careful moderation. Some may note that the discussion on LessWrong is presently more constructive and civil in tone than at times in the past. This is evident when comparing the style of the commenter GPT2 to that of those it was conversing with: GPT2, trained on the entire historical comment corpus, has a noticeably more condescending and contrarian tone than the comments typical of modern LessWrong.
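(For the technically curious, fine-tuning a pretrained language model on a comment corpus looks roughly like the sketch below, using the Hugging Face transformers library. This illustrates the general technique only; it is not how the GPT2 commenter was actually built, and the corpus file is hypothetical.)

```python
# Rough sketch: fine-tune GPT-2 on a plain-text comment corpus
# ("lw_comments.txt" is a hypothetical file). General technique only.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Chunk the corpus into fixed-length blocks for causal language modeling.
dataset = TextDataset(tokenizer=tokenizer, file_path="lw_comments.txt",
                      block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-lw", num_train_epochs=1,
                           per_device_train_batch_size=4),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```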
Intellectual Progress
This category is arguably too broad, but that perhaps captures the fact that LessWrong 2.0 is open to quite a wide range of projects in the pursuit of further intellectual progress (or even just progress, intellectual or otherwise).
The LessWrong forum, with posts, comments, and votes, is already a technology for intellectual progress which allows thinkers to share ideas, get feedback, and build upon each other’s work. The team spends a lot of time thinking about what we could build to help people be more intellectually generative. The team has ongoing debates about what the sections of the site should be (“bringing back something like Main vs Discussion?” “Ah, but the problems!”), whether and how to promote the sharing of unpolished ideas (shortform feeds? these have been gaining in popularity without explicit support), and whether we can set up our own peer review process. Hearteningly, the goal is always to generate more “good content”, not merely to drive up content production and activity. Growth, if not exactly feared, is viewed with suspicion: perhaps it will dilute quality. Already some on the team fear that things trend too much towards insight porn rather than substantive contributions to the intellectual commons.
It’s worth noting that the LessWrong 2.0 team invests effort to promote intellectual progress outside of the lesswrong.com domain. The Effective Altruism Forum runs on the LessWrong codebase (and receives some support from the team). And last year the LessWrong team launched the AI Alignment Forum, a forum specifically for dedicated AI safety researchers. (It is no secret that the LessWrong 2.0 team members especially wish to see progress made on the intellectual problems of AI safety.)
One of the ideas for increasing intellectual progress which the team has been especially occupied with recently is that of an Open Questions platform. Among its many functions, such a platform would be a place where the community coordinates on which problems are most important and creates surface area so that more researchers can contribute.
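To make that concrete, here is a rough sketch of what an underlying data model might look like. This is purely my own illustration, not the team’s actual design:

```python
# Rough sketch of a possible Open Questions data model (my own illustration,
# not the team's design): importance votes let the community coordinate on
# which problems matter; answers and related questions create surface area.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Answer:
    author: str
    body: str
    karma: int = 0

@dataclass
class Question:
    title: str
    details: str
    author: str
    importance_votes: int = 0  # community signal of which problems matter most
    answers: List[Answer] = field(default_factory=list)
    related_question_ids: List[str] = field(default_factory=list)  # decomposing big problems

    def record_answer(self, author: str, body: str) -> Answer:
        """Attach a new answer to this question."""
        ans = Answer(author=author, body=body)
        self.answers.append(ans)
        return ans
```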
Other ideas for things the LessWrong 2.0 team could build to drive intellectual progress are: an optimized collaborative tool (like Google Docs, but better); a marketplace for intellectual labor (think Craigslist/TaskRabbit); a prediction market platform; and a researcher training program. I have written more about these ideas in LW2.0: Technology Platform for Intellectual Progress.