New User’s Guide to LessWrong

The road to wisdom? Well, it’s plain
and simple to express:

Err
and err
and err again
but less
and less
and less.

– Piet Hein

Why a new user guide?

You don’t have to read this guide to get started on LessWrong, though we encourage it!

LessWrong is a pretty particular place. We strive to maintain a culture that’s uncommon for web forums[1] and to stay true to our values. Recently, many more people have been finding their way here, so I (lead admin and moderator) put together this intro to what we’re about.

My hope is that if LessWrong resonates with your values and interests, this guide will help you become a valued member of the community. And if LessWrong isn’t the place for you, this guide will help you have a good “visit” or simply seek other pastures.

If you arrived here out of interest in AI, make sure to read the section on LessWrong and Artificial Intelligence.

What LessWrong is about: “Rationality”

LessWrong is an online forum/​community that was founded with the purpose of perfecting the art of human[2] rationality.

While truth is a property of beliefs, rationality is a property of reasoning processes. Our definition[3] of rationality is that a more rational reasoning process tends to arrive at true beliefs and good decisions more often than a less rational process. For example, a reasoning process that responds to evidence is more likely to believe true things than one that just goes with what’s convenient to believe. An aspiring rationalist[4] is someone who aspires to improve their own reasoning process so as to arrive at truth more often.

...a rationalist isn’t just somebody who respects the Truth... All too many people respect the Truth. A rationalist is somebody who respects the processes of finding truth. – Rationality: Appreciating Cognitive Algorithms

[Aspiring] rationalists should win [at life, their goals, etc]. You know a rationalist because they’re sitting atop a pile of utility. – Rationality is Systematized Winning

The Art [of rationality] must have a purpose other than itself, or it collapses into infinite recursion. – the 11th virtue of rationality

On LessWrong we attempt (though don’t always succeed) to apply the rationality lessons we’ve accumulated to any topic that interests us, and especially topics that seem important, like how to make the world a better place. We don’t just care about truth in the abstract, but care about having true beliefs about things we care about so that we can make better and more successful decisions.

Right now, AI seems like one of the most (or the most) important topics for humanity. It involves many tricky questions, high stakes, and uncertainty in an unprecedented situation. On LessWrong, many users are attempting to apply their best thinking to ensure that the advent of increasingly powerful AI goes well for humanity.[5]

Is LessWrong for you?

LessWrong is a good place for someone who:

  • values curiosity, learning, self-improvement, and figuring out what’s actually true (rather than just what they want to be true, or just winning arguments)

  • will change their mind or admit they’re wrong in response to compelling evidence or argument

  • wants to work collaboratively with others to figure out what’s true

  • likes acknowledging and quantifying uncertainty and applying lessons from probability, statistics, and decision theory to their reasoning

  • is nerdy and interested in all questions of how the world works, and isn’t afraid to reach weird conclusions if the arguments seem valid

  • likes to be pedantic and precise, and likes to bet on their beliefs

  • doesn’t mind reading a lot

If many of these apply to you, then LessWrong might be the place for you.

LessWrong has been getting more attention (e.g. we get linked in major news articles somewhat regularly these days), and many more people have been showing up on the site. We, the site moderators, don’t take for granted that what makes our community special will stay that way without intentional effort, so we are putting more effort into tending to our well-kept garden.

If you’re on board with our program and will help make our community more successful at its goals, then welcome!

Okay, what are some examples of what makes LessWrong different?

I just had a crazy experience. I think I saw someone on the internet have a productive conversation.

I was browsing this website (lesswrong.com, from the guy who wrote that Harry Potter fanfiction I’ve been into), and two people were arguing back and forth about economics, and after like 6 back and forths one of them just said “Ok, you’ve convinced me, I’ve changed my mind”.

Has this ever happened on the internet before?

– chatlog from Habryka to a friend of his, circa 2013–2014 (paraphrased and translated from German)

The LessWrong community shares a culture that encodes a bunch of built-up beliefs, opinions, concepts, and values about how to reason better. These give LessWrong a pretty distinct style from the rest of the Internet.

Some of the features that set LessWrong apart:

  • We applaud you for saying “oops”

  • We treat beliefs as being about shaping your anticipations of what you’ll observe[6]

  • The goal of our conversations is to figure out what’s true, not to win arguments

    • we try to focus on what would change our minds

    • it’s common for us to acknowledge when someone we are debating has made a convincing point, or has even outright convinced us entirely

  • We are very Bayesian

    • Rather than treating belief as binary, we use probabilistic credences to express our certainty/​uncertainty. Rather than say that something is “extremely unlikely”, we’d say “I think there’s a 1% chance or lower of it happening”.

    • We are interested in Bayesian evidence for or against a hypothesis, i.e. observations we are more or less likely to make if the hypothesis is true vs. not. (A minimal worked example of this kind of update follows this list.)

  • We avoid certain behaviors that seem to make conversation worse on the rest of the Internet.

  • And strive for other things that make conversations better.
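To make the Bayesian habit concrete, here is a minimal sketch of such an update in Python. The numbers and the flight-delay scenario are made up purely for illustration: you start with a prior credence, observe some evidence, and shift your credence according to how much more likely that evidence is if the hypothesis is true than if it’s false.

```python
# A minimal Bayesian update: prior credence + likelihoods -> posterior credence.
# All numbers below are invented purely for illustration.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# "I think there's a 10% chance my flight gets delayed" (prior credence).
# Then I see heavy snow, which is three times as likely on delay days.
posterior = bayes_update(prior=0.10, p_evidence_if_true=0.60, p_evidence_if_false=0.20)
print(f"Posterior credence in a delay: {posterior:.0%}")  # -> 25%
```

The arithmetic isn’t the point; the habit is: evidence shifts your credence in proportion to how much better the hypothesis predicts that evidence than its alternatives do.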

Philosophical Heritage: The Sequences

“I re-read the Sequences”, they tell me, “and everything in them seems so obvious. But I have this intense memory of considering them revelatory at the time.”

This is my memory as well. They look like an extremely well-written, cleverly presented version of Philosophy 101. And yet I distinctly remember reading them after I had gotten a bachelor’s degree magna cum laude in Philosophy and being shocked and excited by them. – Scott Alexander in Five Years and One Week of Less Wrong

Between 2006 and 2009, Eliezer Yudkowsky spent two years writing a sequence of blog posts that shared his philosophy/​beliefs/​models about rationality[7]; collectively those blog posts are called The Sequences. In 2009, Eliezer founded LessWrong as a community forum for the people who’d liked that writing and wanted to have discussion inspired by the ways of thinking he described and demonstrated.

If you go to a math conference, people will assume familiarity with calculus; the literature club likely expects you’ve read a few Shakespeare plays; the baseball enthusiasts club assumes knowledge of the standard rules. On LessWrong people expect knowledge of concepts like Conservation of Expected Evidence and Making Beliefs Pay Rent and Adaptation-Executers, not Fitness-Maximizers.

Not all of the most commonly referenced ideas come from The Sequences, but enough of them do that we strongly encourage people to read The Sequences.

Much of the spirit of LessWrong can also be gleaned from Harry Potter and the Methods of Rationality (a fanfic by the same author as The Sequences). Many people found their way to LessWrong via reading it.

Don’t worry! You don’t have to know every idea ever discussed on LessWrong to get started; this is just a heads-up about the kind of place this is.

Topics other than Rationality

The eleventh virtue is scholarship. Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. If you are gluttonous you will become vaster than mountains. It is especially important to eat math and science which impinge upon rationality: evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study. The Art must have a purpose other than itself, or it collapses into infinite recursion. – Twelve Virtues of Rationality

We are interested in rationality not for the sake of rationality alone, but because we care about lots of other things too. LessWrong has rationality as a central focus, but site members are interested in discussing an extremely wide range of topics, albeit using our rationality toolbox/​worldview.

Artificial Intelligence

If you found your way to LessWrong recently, it might be because of your interest in AI. For several reasons, the LessWrong community has a strong interest in AI, and specifically in ensuring that powerful AI systems are safe and beneficial.

  • AI is a field concerned with how minds and intelligence work, overlapping a lot with rationality.

  • Historically, LessWrong was seeded by the writings of Eliezer Yudkowsky, an artificial intelligence researcher.

  • Many members of the LessWrong community are heavily motivated by trying to improve the world as much as possible, and these people were convinced many years ago that AI was a very big deal for the future of humanity. Since then LessWrong has hosted a lot of discussion of AI Alignment/​AI Safety, and that’s only accelerated recently with further AI capabilities developments.

    • LessWrong is also integrated with the Alignment Forum

    • The LessWrong team, who maintain and develop the site, are predominantly motivated by trying to ensure that outcomes from powerful AI are good.

Even if you found your way to LessWrong because of your interest in AI, it’s important for you to be aware of the site’s focus on rationality, as this shapes expectations we have of all users in their posting, commenting, etc.

How to get started

Because LessWrong is a pretty unusual place, it’s usually a good idea to spend some time on the site before writing your own posts or getting deep into comment discussions – doing so makes it much more likely that what you write will be well received.

Here’s the reading we recommend:

Foundational reading

LessWrong grew from the people who read Eliezer Yudkowsky’s writing on the shared blog overcomingbias.com and then migrated to a newly founded community blog in 2009. To better understand the culture and shared assumptions on LessWrong, read The Sequences.

The Sequences in full are pretty long, so we also have The Sequences Highlights for an initial taste. The Codex, a collection of writing by Scott Alexander (author of Slate Star Codex/​Astral Codex Ten), is also a good place to start, as is Harry Potter and the Methods of Rationality.

Exploring your interests

The Concepts Page shows a very long list of topics on which LessWrong has posts. You can use that page to find posts on topics that interest you, and to see what the style is on LessWrong.

Participate in welcome threads

The monthly general Open and Welcome thread is a good place to introduce yourself and ask questions, e.g. requesting reading recommendations or floating your post ideas. There are frequently new “all questions welcome” AI Open Threads if that’s what you’d like to discuss.

Attend a local meetup

There are local LessWrong (and SSC/​ACX) meetups in cities around the world. Find one (or register for notifications) on our event page.

Helpful Tips

If you have questions about the site, here are a few places you can get answers:

How to ensure your first post or comment is well-received

This is a hard section to write. The new users who need to read it least are the most likely to spend time worrying about the below, and those who need it most are likely to ignore it. Don’t stress too hard. If you submit something and we don’t like it, we’ll give you some feedback.

A lot of the below is written for the people who aren’t putting in much effort at all, so we can at least say “hey, we did give you a heads up in multiple places”.

There are a number of dimensions upon which content submissions may be strong or weak. Strength in one place can compensate for weakness in another, but overall the moderators assess each new user’s first post/​comment on the following dimensions. If the first submission is lacking, it might be rejected and you’ll get feedback on why.

Your first post or comment is more likely to be approved by moderators (and upvoted by general site users) if you:

Demonstrate understanding of LessWrong rationality fundamentals. Or at least don’t do anything that contravenes them. These are the kinds of things covered in The Sequences, such as probabilistic reasoning, proper use of beliefs, being curious about where you might be wrong, avoiding arguing over definitions, etc. See the Foundational Reading section above.

Write a clear introduction. If your first submission is lengthy, i.e. a long post, it’s more likely to be approved quickly if the site moderators can easily understand what you’re trying to say rather than having to delve deep into your post to figure it out. Once you’re established on the site and people know that you have good things to say, you can pull off having a “literary” opening that doesn’t start with the main point.

Address existing arguments on the topic (if applicable). Many topics have been discussed at length already on LessWrong, or have an answer strongly implied by core content on the site, e.g. from The Sequences (which have rather large relevance to AI questions). Your submission is more likely to be accepted if it’s clear you’re aware of prior relevant discussion and are building upon it. It’s not a big deal if you weren’t aware; there’s just a chance the moderator team will reject your submission and point you to relevant material.

This doesn’t mean that you can’t question positions commonly held on LessWrong, just that it’s a lot more productive for everyone involved if you’re able to respond to or build upon the existing arguments, e.g. showing why they’re wrong.

Address the LessWrong audience. A recent trend is more and more people crossposting from their personal blogs, e.g. their Substack or Medium, to LessWrong. There’s nothing inherently wrong with that (we welcome good content!) but many of these posts neither strike us as particularly interesting or insightful, nor demonstrate an interest in LessWrong’s culture/​norms or audience (as revealed by a very different style and not really responding to anyone on site).

It’s good (though not absolutely necessary) when a post is written for the LessWrong audience and shows that by referencing other discussions on LessWrong (links to other posts are good).

Aim for a high standard if you’re contributing on the topic of AI. As AI becomes higher and higher profile in the world, many more people are flowing to LessWrong because we host discussion of it. In order to not lose what makes our site uniquely capable of making good intellectual progress, we have particularly high standards for new users showing up to talk about AI. If we don’t think your AI-related contribution is particularly valuable and it’s not clear you’ve tried to understand the site’s culture or values, then it’s possible we’ll reject it.

Don’t worry about it too hard.

It’s okay if we don’t like your first submission; we will give you feedback. In many ways, the bar isn’t that high. As I wrote above, this document exists so that not being approved on your first submission doesn’t come as a surprise. If you’re writing a comment and not a 5,000-word post, don’t stress about it.

If you do want to write something longer, the bar is much lower in open threads, e.g. the general one or the AI one. That’s a good place to say “I have an idea about X, does LessWrong have anything on that already?”

In conclusion, welcome!

And that’s it! Hopefully this intro sets you up for good reading and good engagement with LessWrong.

Appendices

The Voting System

The voting or “karma” system is pretty integral to how LessWrong promotes (or hides) content. The standard advice for how to vote is: upvote if you want to see more of something, downvote if you want to see less.

Strong Votes and Vote Strength

LessWrong has strong votes too, for when you feel particularly strongly about something. Different users have different vote strengths based on how many upvotes/​downvotes they’ve received.
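As a purely hypothetical illustration of how weights like this could scale (this is not LessWrong’s actual formula), vote strength might grow slowly with a user’s accumulated karma:

```python
# Hypothetical illustration only: NOT LessWrong's real formula.
# The idea: strong-vote weight grows slowly with accumulated karma,
# so established, well-regarded users' strong votes count for more.
import math

def strong_vote_weight(user_karma: int) -> int:
    """Made-up mapping from karma to strong-vote weight, for illustration."""
    return 1 + int(math.log10(max(user_karma, 1)))

for karma in (0, 100, 10_000):
    print(f"karma {karma:>6} -> strong vote weight {strong_vote_weight(karma)}")
# karma      0 -> strong vote weight 1
# karma    100 -> strong vote weight 3
# karma  10000 -> strong vote weight 5
```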

Two-Axis System

It’s possible to want to see more of something (e.g. an interesting argument) even if you disagree with it, or to think an argument is weak even though it’s for a conclusion you agree with. LessWrong makes it possible to express that you’d like to see more/​less of something separately from whether you agree/​disagree with it. (Currently this two-axis voting is available only on comments.) This means that upvotes and downvotes on the main axis can be used to express judgments of quality separate from agreement. But the same spirit applies to posts too. (A toy sketch of the two axes follows below.)
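To make the two axes concrete, here is a toy sketch in Python. It illustrates the concept only; it is not LessWrong’s actual implementation, and the names in it are made up.

```python
# Toy sketch of two-axis voting (illustrative only, not LessWrong's code):
# each comment carries two independent tallies, so "this is a good comment"
# and "I agree with it" are separate signals.
from dataclasses import dataclass

@dataclass
class CommentVotes:
    karma: int = 0      # quality axis: "I want to see more/less like this"
    agreement: int = 0  # agreement axis: "I think this is right/wrong"

    def vote(self, karma_delta: int = 0, agreement_delta: int = 0) -> None:
        self.karma += karma_delta
        self.agreement += agreement_delta

# A well-argued comment you disagree with: upvote quality, downvote agreement.
votes = CommentVotes()
votes.vote(karma_delta=+1, agreement_delta=-1)
print(votes)  # CommentVotes(karma=1, agreement=-1)
```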

LessWrong moderator’s toolkit

We on the LessWrong mod team like to be transparent about our moderation process. We take tending the garden seriously, and are continuously improving our tools for maintaining a well-kept site. Here are some of our tools and processes.

Initial user/​content review

  • We review every first post and comment before it goes live to ensure it’s up to par (see the section above on ensuring your first post or comment is well-received).

    • it’s okay if your first submission (or several) doesn’t meet the bar; we’ll give you feedback on what to change

  • If we don’t like your submission, we mark it as rejected and it will be displayed (without your username) on the Rejected Content page. That page exists so people can double-check our decisions.

  • After approving a user’s first post or submission, we tend to keep an eye on their next few submissions before giving a more “full approval”.

  • Users who have negative karma (vote points) or have several downvoted submissions in a row automatically get flagged for re-review.

Moderator actions

When content seems to make the site worse, we’ll apply the following, in order of severity:

  • Warnings

  • Rate limits (of varying strictness)

    • we will soon be experimenting with automatic rate limits: users with very low or negative karma will be automatically restricted in how frequently they can post and comment. For example, someone who’s quickly posted several negative-karma posts will need to wait before being allowed to post the next one. (A toy sketch of the idea follows this list.)

  • Temporary bans

  • Full bans
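To make the rate-limit idea concrete, here is a toy sketch in Python of what karma-based automatic rate limiting could look like. The thresholds and wait times are invented for illustration; they are not LessWrong’s actual policy or code.

```python
# Toy sketch of karma-based rate limiting (illustrative only):
# lower karma means a longer mandatory wait between posts.
from datetime import datetime, timedelta
from typing import Optional

def min_wait_between_posts(user_karma: int) -> timedelta:
    """Hypothetical policy: negative karma waits days, low karma waits a day."""
    if user_karma < 0:
        return timedelta(days=3)
    if user_karma < 50:
        return timedelta(days=1)
    return timedelta(0)

def can_post_now(user_karma: int, last_post_time: Optional[datetime]) -> bool:
    """True if enough time has passed since the user's last post."""
    if last_post_time is None:
        return True
    return datetime.now() - last_post_time >= min_wait_between_posts(user_karma)

# A user at -5 karma who posted yesterday must wait longer before posting again.
print(can_post_now(user_karma=-5, last_post_time=datetime.now() - timedelta(days=1)))  # False
```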

Rules to be aware of

  • Don’t make sock puppet accounts to upvote your other accounts

  • Don’t make new accounts to evade moderator action

  • Don’t “vote brigade”, that is, don’t solicit extra upvotes on your content

  1.

    I won’t claim that we’re entirely unique, but I don’t think our site is typical of the internet.

    Some people have pointed out to me that other Internet communities also aim in the direction of collaborative, truth-seeking discourse, such as Reddit’s ELI5 or Change My View; adjacent communities like Astral Codex Ten; and discourse in technical communities of engineers, academics, etc.

  2.

    We say “human” rationality because we’re most interested in how we humans can perform best given how our brains work (as opposed to the general rationality that’d apply to AIs and aliens too).

  3.

    The definition of “rationality” on LessWrong isn’t 100% universally agreed to, though this one is the most standard.

  4.

    This is ideally what we’d call ourselves all the time, but since it’s a bit of a mouthful, people tend to just say “rationalist” without qualification. Nonetheless, we do not claim that we’ve definitely attained that much rationality. But we’re aiming to.

  5.

    In fact, one of the ulterior motives of Eliezer Yudkowsky (LessWrong’s founder) for founding LessWrong in 2009 was that rationality would help people think about AI. Back in 2009, it took more perceptiveness, and more willingness to entertain weird ideas, to see that AI might be powerful and dangerous in the nearish future.

  6.

    As opposed to beliefs being for signaling group affiliation and having pleasant feelings.

  7.

    In a 2014 comment, Eliezer described the Sequences as containing 60% standard positions, 25% ideas you could find elsewhere with some hard looking, and 15% original ideas. He says that the non-boring tone might have fooled people into thinking more of it is original than actually is, but that the curation of which things he included, and how they fit together into a single package, was itself a form of originality.