[Feedback please] New User’s Guide to LessWrong
The LessWrong team is currently thinking a lot about what happens with new users: the bar for their contributions being accepted, how we deliver feedback on and restrict contributions that don't meet it, and, most importantly, how we get new users onboarded onto the site.
This is a draft of a document we’d present to new users to help them understand what LessWrong is about. I’m interested in early community feedback about whether I’m hitting the right notes here before investing a lot more in it.
This document also references another post that's more of a list of norms, akin to Basics of Rationalist Discourse, though (1) I haven't written that yet, and (2) I'm much less certain about its shape or nature. I'll share a post or draft about that soon too.
This document is aimed at new users but may also be a useful reference for established users. It elaborates on the about page.
The Core of LessWrong: Rationality
LessWrong is an online forum and community built around the goal of improving human reasoning and decision-making. The community believes there are ways of thinking such that, if you figure them out and adopt them, you will systematically[1] arrive at true beliefs and good decisions more often than someone who hasn't adopted them. Around here, the short word for “systematically arriving at truth, etc.” is rationality, and that's at the core of this site.
More than that, the LessWrong community shares a culture that encodes a body of accumulated beliefs, opinions, concepts, and values about how to reason better. These give LessWrong a style quite distinct from the rest of the Internet.
Some of the features that set LessWrong apart:
We treat beliefs as being about shaping your anticipations of what you’ll observe[2]
The goal of our conversations is to figure out what’s true, not win arguments
People focus on what would change their minds
It's common for people to acknowledge when someone they're debating has made a convincing point, or has even outright convinced them entirely
We are very Bayesian
Rather than treating belief as binary, we use probabilistic credences to express our certainty or uncertainty. Rather than saying something is “extremely unlikely”, we'd say “I think there's a 1% chance or lower of it happening”.
We are interested in Bayesian evidence for a hypothesis, i.e. any observation that is more likely to occur if the hypothesis is true than if it is false (a short worked example follows this list).
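To make this concrete, here is a minimal, hypothetical worked example of a single Bayesian update. The function name and all of the numbers below are made up purely for illustration; they aren't drawn from any LessWrong material.

```python
# A minimal sketch of one Bayesian update, with made-up numbers for illustration.

def bayes_update(prior: float, p_obs_if_true: float, p_obs_if_false: float) -> float:
    """Return the posterior credence in a hypothesis after one observation,
    via Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_obs_if_true * prior
    denominator = numerator + p_obs_if_false * (1 - prior)
    return numerator / denominator

# Start out 20% confident. The observation is three times as likely
# if the hypothesis is true (60%) as if it is false (20%), so it counts as evidence.
posterior = bayes_update(prior=0.20, p_obs_if_true=0.60, p_obs_if_false=0.20)
print(f"Credence after seeing the evidence: {posterior:.0%}")  # -> 43%
```

The observation counts as evidence precisely because it is more likely under the hypothesis than under its negation; a 3:1 likelihood ratio moves a 20% credence up to roughly 43%, not to certainty.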
Philosophical Heritage: The Sequences
Between 2006 and 2009, Eliezer Yudkowsky spent two years writing a sequence of blog posts that shared his philosophy/beliefs/models about rationality (collectively those blog posts are called The Sequences). In 2009, Eliezer founded LessWrong as a community forum for the people who were attracted to his ideas and worldview.
While not everyone on the site agrees with everything Eliezer says, The Sequences (also published as Rationality: From AI to Zombies) is the foundational cultural/values document of LessWrong. To understand LessWrong and participate well (and also for the sake of your own reasoning ability), we strongly encourage you to read the Sequences.
The original Sequences were ~700 blog posts.
Rationality: A-Z is an edited and distilled version of ~400 of those posts, compiled in 2015.
Highlights from the Sequences is a selection of 50 top posts from the Sequences, suggested as a quick place to start.
Topics other than Rationality

The eleventh virtue is scholarship. Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. If you are gluttonous you will become vaster than mountains. It is especially important to eat math and science which impinge upon rationality: evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study. The Art must have a purpose other than itself, or it collapses into infinite recursion. − 12 Virtues of Rationality
We are interested in rationality not for the sake of rationality alone, but because we care about lots of other things too. LessWrong has rationality as a central focus, but site members are interested in discussing an extremely wide range of topics, albeit using our rationality toolbox/worldview.
Artificial Intelligence
If you found your way to LessWrong recently, it might be because of your interest in AI. For several reasons, the LessWrong community has a strong interest in AI, and specifically in causing powerful AI systems to be safe and beneficial.
AI is a field concerned with how minds and intelligence work, overlapping a lot with rationality.
Historically, LessWrong was seeded by the writings of Eliezer Yudkowsky, an artificial intelligence researcher.
Many members of the LessWrong community are heavily motivated by trying to improve the world as much as possible, and these people were convinced many years ago that AI was a very big deal for the future of humanity. Since then LessWrong has hosted a lot of discussion of AI Alignment/AI Safety, and that’s only accelerated recently with further AI capabilities developments.
LessWrong is also integrated with the Alignment Forum
The LessWrong team, who maintain and develop the site, are predominantly motivated by trying to cause powerful AI outcomes to be good.
Even if you found your way to LessWrong because of your interest in AI, it’s important for you to be aware of the site’s focus on rationality, as this shapes expectations we have of all users in their posting, commenting, etc.
How to get started
<TO-DO>
read sequences, codex, etc
read top posts on tags of interest (concepts page)
request reading recommendations in Open Thread
read stuff in general
attend a local meetup
not necessarily a tonne of this, but if it’s your first day on LessWrong, you’ll be missing <something>
</TO-DO>
How to ensure your first post or comment is approved
This is a hard section to write. The new users who need to read it least are the most likely to spend time worrying about the points below, and those who need it most are likely to ignore them. Don't stress too hard. If you submit something and we don't like it, we'll give you some feedback.
A lot of the below is written for the people who aren’t putting in much effort at all, so we can at least say “hey, we did give you a heads up in multiple places”.
There are a number of dimensions on which content submissions may be strong or weak. Strength in one area can compensate for weakness in another, but overall the moderators assess each new user's first post/comment on the following. If a first submission is lacking, it might be rejected and you'll get feedback on why.
Your first post or comment is more likely to be approved by moderators (and upvoted by general site users) if:
You demonstrate understanding of LessWrong rationality fundamentals. These are the kinds of things covered in The Sequences such as probabilistic reasoning, proper use of beliefs, being curious about where you might be wrong, avoiding arguing over definitions, etc.
You write a clear introduction. If your first submission is lengthy, i.e. a long post, it's more likely to be quickly approved if the site moderators can easily understand what you're trying to say rather than having to delve deep into your post to figure it out. Once you're established on the site and people know that you have good things to say, you can pull off having a “literary” opening that doesn't start with the main point.
Address existing arguments on the topic (if applicable). Many topics have already been discussed at length on LessWrong, or have an answer strongly implied by core content on the site, e.g. from the Sequences (which have rather large relevance to AI questions). Your submission is more likely to be accepted if it's clear you're aware of prior relevant discussion and are building upon it. It's not a big deal if you weren't aware; there's just a chance the moderator team will reject your submission and point you to relevant material.
This doesn’t mean that you can’t question positions commonly held on LessWrong, just that it’s a lot more productive for everyone involved if you’re able to respond to or build upon the existing arguments, e.g. showing why you think they’re wrong.
Address the LessWrong audience
A recent trend is that more and more people are crossposting from their personal blogs, e.g. their Substack or Medium, to LessWrong. There's nothing inherently wrong with that (we welcome good content!), but many of these posts neither strike us as particularly interesting or insightful, nor demonstrate an interest in LessWrong's culture/norms or audience (as revealed by a very different style and by not really responding to anyone on the site).
It's good (though not absolutely necessary) when a post is written for the LessWrong audience and shows that by referencing other discussions on LessWrong (links to other posts are good).
Aim for a high standard if you're contributing on the topic of AI
As AI becomes higher and higher profile in the world, many more people are flowing to LessWrong because we have discussion of it. In order to not lose what makes our site uniquely capable of making good intellectual progress, we have particularly high standards for new users showing up to talk about AI. If we don’t think your AI-related contribution is particularly valuable and it’s not clear you’ve tried to understand the site’s culture or values, then it’s possible we’ll reject it.
A longer list of guidelines on LessWrong can be found here [Link]
Don’t worry about it too hard.
It's ok if we don't like your first submission; we can just give you feedback. In many ways, the bar isn't that high. As I wrote above, this document exists so that not being approved on your first submission doesn't come as a surprise. If you're writing a comment and not a 5,000-word post, don't stress about it.
If you do want to write something longer, there is a much lower bar for open threads, e.g. the general one [link] or AI one [link]. That’s a good place to say “I have an idea about X, does LessWrong have anything on that already?”
Helpful Tips <to-do>
FAQ
Intercom
OpenThreads
LessWrong moderator’s tool kit.
[1] This means you won't necessarily do better on every occasion, but that on average you will.
[2] As opposed to beliefs being for signaling group affiliation and having pleasant feelings.