New User’s Guide to LessWrong
The road to wisdom? Well, it’s plain and simple to express: Err and err and err again but less and less and less. – Piet Hein
Why a new user guide?
Although encouraged, you don’t have to read this to get started on LessWrong!
LessWrong is a pretty particular place. We strive to maintain a culture that’s uncommon for web forums[1] and to stay true to our values. Recently, many more people have been finding their way here, so I (lead admin and moderator) put together this intro to what we’re about.
My hope is that if LessWrong resonates with your values and interests, this guide will help you become a valued member of the community. And if LessWrong isn’t the place for you, this guide will help you have a good “visit” or simply seek other pastures.
Contents of this page/email
If you arrived here out of interest in AI, make sure to read the section on LessWrong and Artificial Intelligence.
What LessWrong is about: “Rationality”
LessWrong is an online forum and community that was founded with the purpose of perfecting the art of human[2] rationality.
While truth is a property of beliefs, rationality is a property of reasoning processes. Our definition[3] of rationality is that a more rational reasoning process tends to arrive at true beliefs and good decisions more often than a less rational process. For example, a reasoning process that responds to evidence is more likely to believe true things than one that just goes with what’s convenient to believe. An aspiring rationalist[4] is someone who aspires to improve their own reasoning process to arrive at truth more often.
...a rationalist isn’t just somebody who respects the Truth...All too many people respect the Truth. A rationalist is somebody who respects the processes of finding truth. – Rationality: Appreciating Cognitive Algorithms
[Aspiring] rationalists should win [at life, their goals, etc]. You know a rationalist because they’re sitting atop a pile of utility. – Rationality is systematized winning
The Art [of rationality] must have a purpose other than itself, or it collapses into infinite recursion. - the 11th virtue of rationality
On LessWrong we attempt (though don’t always succeed) to apply the rationality lessons we’ve accumulated to any topic that interests us, and especially topics that seem important, like how to make the world a better place. We don’t just care about truth in the abstract, but care about having true beliefs about things we care about so that we can make better and more successful decisions.
Right now, AI seems like one of the most (or the most) important topics for humanity. It involves many tricky questions, high stakes, and uncertainty in an unprecedented situation. On LessWrong, many users are attempting to apply their best thinking to ensure that the advent of increasingly powerful AI goes well for humanity.[5]
Is LessWrong for you?
LessWrong is a good place for someone who:
values curiosity, learning, self-improvement, figuring out what’s actually true (rather than just what they want to be true or just winning arguments)
will change their mind or admit they’re wrong in response to compelling evidence or argument
wants to work collaboratively with others to figure out what’s true
likes acknowledging and quantifying uncertainty and applying lessons from probability, statistics, and decision theory to their reasoning
is nerdy and interested in all questions of how the world works and who is not afraid to reach weird conclusions if the arguments seem valid
likes to be pedantic and precise, and likes to bet on their beliefs
doesn’t mind reading a lot
If many of these apply to you, then LessWrong might be the place for you.
LessWrong has been getting more attention (e.g. we get linked in major news articles somewhat regularly these days), and so many more people have been showing up on the site. We, the site moderators, don’t take for granted that what makes our community special will stay that way without intentional effort, so we are putting more effort into tending to our well-kept garden.
If you’re on board with our program and will help make our community more successful at its goals, then welcome!
Okay, what are some examples of what makes LessWrong different?
I just had a crazy experience. I think I saw someone on the internet have a productive conversation.
I was browsing this website (lesswrong.com, from the guy who wrote that Harry Potter fanfiction I’ve been into), and two people were arguing back and forth about economics, and after like 6 back and forths one of them just said “Ok, you’ve convinced me, I’ve changed my mind”.
Has this ever happened on the internet before?
– paraphrased and translated chatlog (from German) by Habryka to a friend of his, circa 2013-2014
The LessWrong community shares a culture that encodes a bunch of built-up beliefs, opinions, concepts, and values about how to reason better. These give LessWrong a pretty distinct style from the rest of the Internet.
Some of the features that set LessWrong apart:
We applaud you for saying “oops”
We treat beliefs as being about shaping your anticipations of what you’ll observe[6]
The goal of our conversations is to figure out what’s true, not to win arguments
we try to focus on what would change our minds
it’s common for us to acknowledge when someone we are debating has made a convincing point, or has even outright convinced us entirely
We are very Bayesian
Rather than treating belief as binary, we use probabilistic credences to express our degree of certainty. Rather than say that something is “extremely unlikely”, we’d say “I think there’s a 1% chance or lower of it happening”.
We are interested in Bayesian evidence for or against a hypothesis, i.e. observations we are more or less likely to make if the hypothesis is true vs not.
We avoid certain behaviors that seem to make conversation worse on the rest of the Internet.
And strive for other things that make conversations better.
these are not official LessWrong site guidelines, but suggestive of the culture around here: Basics of Rationalist Discourse and Elements of Rationalist Discourse
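The probabilistic habits described above have a simple mechanical core. As an illustrative sketch (not part of the guide itself; the numbers are invented for the example), here is Bayes’ rule applied to a 1% prior credence:

```python
# Toy Bayesian update: revise a credence in light of evidence,
# weighing how likely the evidence is if the hypothesis is true vs. false.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior credence after observing the evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# "I think there's a 1% chance of it happening", then we observe
# something ten times more likely if the hypothesis is true.
posterior = update(prior=0.01, p_evidence_if_true=0.5, p_evidence_if_false=0.05)
print(round(posterior, 3))  # 0.092 — the credence rises, but stays far below 50%
```

Note how even strong-sounding evidence (a 10:1 likelihood ratio) leaves a low-prior hypothesis unlikely; that is the kind of quantitative reasoning the list above gestures at.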
Philosophical Heritage: The Sequences
“I re-read the Sequences”, they tell me, “and everything in them seems so obvious. But I have this intense memory of considering them revelatory at the time.”
This is my memory as well. They look like an extremely well-written, cleverly presented version of Philosophy 101. And yet I distinctly remember reading them after I had gotten a bachelor’s degree magna cum laude in Philosophy and being shocked and excited by them. – Scott Alexander in Five Years and One Week of Less Wrong
Between 2006 and 2009, Eliezer Yudkowsky spent two years writing a sequence of blog posts that shared his philosophy/beliefs/models about rationality[7]; collectively those blog posts are called The Sequences. In 2009, Eliezer founded LessWrong as a community forum for the people who’d liked that writing and wanted to have discussion inspired by the ways of thinking he described and demonstrated.
If you go to a math conference, people will assume familiarity with calculus; the literature club likely expects you’ve read a few Shakespeare plays; the baseball enthusiasts club assumes knowledge of the standard rules. On LessWrong people expect knowledge of concepts like Conservation of Expected Evidence and Making Beliefs Pay Rent and Adaptation-Executers, not Fitness-Maximizers.
Not all the most commonly referenced ideas come from The Sequences, but enough of them do that we strongly encourage people to read The Sequences.
Ways to get started
The original sequences were ~700 blog posts.
Rationality: A-Z was an edited and distilled version compiled in 2015 of ~400 posts.
Highlights from the Sequences is 50 top posts from the Sequences. They’re a good place to start.
Much of the spirit of LessWrong can also be gleaned from Harry Potter and the Methods of Rationality (a fanfic by the same author as The Sequences). Many people found their way to LessWrong via reading it.
Don’t worry! You don’t have to know every idea ever discussed on LessWrong to get started; this is just a heads-up on the kind of place this is.
Topics other than Rationality
The eleventh virtue is scholarship. Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. If you are gluttonous you will become vaster than mountains. It is especially important to eat math and science which impinge upon rationality: evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study. The Art must have a purpose other than itself, or it collapses into infinite recursion. − 12 Virtues of Rationality
We are interested in rationality not for the sake of rationality alone, but because we care about lots of other things too. LessWrong has rationality as a central focus, but site members are interested in discussing an extremely wide range of topics, albeit using our rationality toolbox/worldview.
Artificial Intelligence
If you found your way to LessWrong recently, it might be because of your interest in AI. For several reasons, the LessWrong community has a strong interest in AI, and specifically in causing powerful AI systems to be safe and beneficial.
AI is a field concerned with how minds and intelligence work, overlapping a lot with rationality.
Historically, LessWrong was seeded by the writings of Eliezer Yudkowsky, an artificial intelligence researcher.
Many members of the LessWrong community are heavily motivated by trying to improve the world as much as possible, and these people were convinced many years ago that AI was a very big deal for the future of humanity. Since then LessWrong has hosted a lot of discussion of AI Alignment/AI Safety, and that’s only accelerated recently with further AI capabilities developments.
LessWrong is also integrated with the Alignment Forum, a closely related forum focused on technical AI alignment research, whose posts are crossposted to LessWrong
The LessWrong team, who maintain and develop the site, are predominantly motivated by trying to cause powerful AI outcomes to be good.
Even if you found your way to LessWrong because of your interest in AI, it’s important for you to be aware of the site’s focus on rationality, as this shapes expectations we have of all users in their posting, commenting, etc.
How to get started
Because LessWrong is a pretty unusual place, it’s usually a good idea for users to have spent some time on the site before writing their own posts or getting deep into comment discussions – doing so makes it more likely that you’ll write something well-received.
Here’s the reading we recommend:
Foundational reading
LessWrong grew from the people who read Eliezer Yudkowsky’s writing on the shared blog overcomingbias.com and then migrated to a newly founded community blog in 2009. To better understand the culture and shared assumptions on LessWrong, read The Sequences.
The full Sequences is pretty long, so we also have The Sequences Highlights for an initial taste. The Codex, a collection of writing by Scott Alexander (author of Slate Star Codex/Astral Codex Ten) is also a good place to start, as is Harry Potter and the Methods of Rationality.
Exploring your interests
The Concepts Page shows a very long list of topics on which LessWrong has posts. You can use that page to find posts that cover topics interesting to you, and to see what the style is on LessWrong.
Participate in welcome threads
The monthly general Open and Welcome thread is a good place to introduce yourself and ask questions, e.g. requesting reading recommendations or floating your post ideas. There are frequently new “all questions welcome” AI Open Threads if that’s what you’d like to discuss.
Attend a local meetup
There are local LessWrong (and SSC/ACX) meetups in cities around the world. Find one (or register for notifications) on our event page.
Helpful Tips
If you have questions about the site, here are a few places you can get answers:
Asking in the monthly Open Thread
Looking in the FAQ
(unfortunately getting a bit out of date)
How to ensure your first post or comment is well-received
This is a hard section to write. The new users who need to read it least are more likely to spend time worrying about the below, and those who need it most are likely to ignore it. Don’t stress too hard. If you submit something and we don’t like it, we’ll give you some feedback.
A lot of the below is written for the people who aren’t putting in much effort at all, so we can at least say “hey, we did give you a heads up in multiple places”.
There are a number of dimensions upon which content submissions may be strong or weak. Strength in one place can compensate for weakness in another, but overall the moderators assess each first post/comment from new users for the following. If the first submission is lacking, it might be rejected and you’ll get feedback on why.
Your first post or comment is more likely to be approved by moderators (and upvoted by general site users) if you:
Demonstrate understanding of LessWrong rationality fundamentals. Or at least don’t do anything contravened by them. These are the kinds of things covered in The Sequences such as probabilistic reasoning, proper use of beliefs, being curious about where you might be wrong, avoiding arguing over definitions, etc. See the Foundational Reading section above.
Write a clear introduction. If your first submission is lengthy, i.e. a long post, it’s more likely to get quickly approved if the site moderators can quickly understand what you’re trying to say rather than having to delve deep into your post to figure it out. Once you’re established on the site and people know that you have good things to say, you can pull off having a “literary” opening that doesn’t start with the main point.
Address existing arguments on the topic (if applicable). Many topics have been discussed at length already on LessWrong, or have an answer strongly implied by core content on the site, e.g. from the Sequences (which has rather large relevance to AI questions). Your submission is more likely to be accepted if it’s clear you’re aware of prior relevant discussion and are building upon it. It’s not a big deal if you weren’t aware, there’s just a chance the moderator team will reject your submission and point you to relevant material.
This doesn’t mean that you can’t question positions commonly held on LessWrong, just that it’s a lot more productive for everyone involved if you’re able to respond to or build upon the existing arguments, e.g. showing why they’re wrong.
Address the LessWrong audience. A recent trend is more and more people crossposting from their personal blogs, e.g. their Substack or Medium, to LessWrong. There’s nothing inherently wrong with that (we welcome good content!) but many of these posts neither strike us as particularly interesting or insightful, nor demonstrate an interest in LessWrong’s culture/norms or audience (as revealed by a very different style and not really responding to anyone on site).
It’s good (though not absolutely necessary) when a post is written for the LessWrong audience and shows that by referencing other discussions on LessWrong (links to other posts are good).
Aim for a high standard if you’re contributing on the topic of AI. As AI becomes higher and higher profile in the world, many more people are flowing to LessWrong because we host discussion of it. In order to not lose what makes our site uniquely capable of making good intellectual progress, we have particularly high standards for new users showing up to talk about AI. If we don’t think your AI-related contribution is particularly valuable and it’s not clear you’ve tried to understand the site’s culture or values, then it’s possible we’ll reject it.
Don’t worry about it too hard.
It’s ok if we don’t like your first submission; we will give you feedback. In many ways, the bar isn’t that high. As I wrote above, much of this document exists so that not being approved on your first submission doesn’t come as a surprise. If you’re writing a comment and not a 5,000-word post, don’t stress about it.
If you do want to write something longer, there is a much lower bar for open threads, e.g. the general one or AI one. That’s a good place to say “I have an idea about X, does LessWrong have anything on that already?”
In conclusion, welcome!
And that’s it, hopefully this intro sets you up for good reading and good engagement with LessWrong!
Appendices
The Voting System
The voting or “karma” system is pretty integral to how LessWrong promotes (or hides) content. The standard advice for how to vote is: upvote if you want to see more of something, downvote if you want to see less.
Strong Votes and Vote Strength
LessWrong has strong votes too, for when you feel particularly strongly about something. Different users have different vote strengths based on how many upvotes/downvotes they’ve received.
Two-Axis System
It’s possible to want to see more of something (e.g. interesting arguments) even if you disagree with it, or to think an argument is weak even though it supports a conclusion you agree with. LessWrong makes it possible to express wanting to see more/less of something separately from whether you agree/disagree with it (currently only on comments). This means that upvotes and downvotes on the main axis can be used to express judgments of quality separate from agreement. But the same spirit applies to posts too.
LessWrong moderator’s toolkit
The LessWrong mod team likes to be transparent about our moderation process. We take tending the garden seriously, and are continuously improving our tools for maintaining a well-kept site. Here are some of our tools and processes.
Initial user/content review
We review every first post and comment before it goes live to ensure it’s up to par (see section above on ensuring your first comment gets approved).
it’s okay if your first submission or several don’t meet the bar, we’ll give you feedback on what to change if something’s not good
If we don’t like your submission, we mark it as rejected and it will be displayed (without your username) on the Rejected Content page. That page exists so that people can double-check our decisions.
After approving a user’s first post or submission, we tend to keep an eye on their next few submissions before giving a more “full approval”.
Users who have negative karma (vote points) or have several downvoted submissions in a row automatically get flagged for re-review.
Moderator actions
When there’s stuff that seems to make the site worse, we’ll apply the following, in order of severity:
Warnings
Rate limits (of varying strictness)
we will soon be experimenting with automatic rate limits: users with very low or negative karma will be automatically restricted in how frequently they can post and comment. For example, someone who’s quickly posted several negative-karma posts will need to wait before being allowed to post the next one.
Temporary bans
Full bans
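As an illustrative sketch of how an automatic, karma-based rate limit like the one mentioned above might work — the function names and thresholds here are entirely invented for illustration, not LessWrong’s actual rules or code:

```python
from datetime import datetime, timedelta

# Hypothetical karma-based rate limiter: lower-karma users may post
# less frequently. All thresholds below are made up for this sketch.

def posts_allowed_per_day(karma):
    """Map a user's karma to a daily posting quota."""
    if karma < 0:
        return 1    # e.g. negative-karma users wait between posts
    if karma < 50:
        return 3
    return 10       # established users are effectively unrestricted

def may_post_now(karma, recent_post_times, now):
    """Allow a post only if the user is under their quota for the last day."""
    window_start = now - timedelta(days=1)
    recent = [t for t in recent_post_times if t > window_start]
    return len(recent) < posts_allowed_per_day(karma)

now = datetime(2023, 6, 1, 12, 0)
earlier = [now - timedelta(hours=2)]
print(may_post_now(karma=-5, recent_post_times=earlier, now=now))  # False
```

The point of the sketch is the shape of the policy: someone who has quickly posted several negative-karma posts hits their quota and must wait before the next one, while users in good standing are unaffected.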
Rules to be aware of
Don’t make sock puppet accounts to upvote your other accounts
Don’t make new accounts to evade moderator action
Don’t “vote brigade”, that is, don’t solicit extra upvotes on your content
- ^
I won’t claim that we’re entirely unique, but I don’t think our site is typical of the internet.
Some people pointed out to me that other Internet communities also aim more in the direction of collaborative and truth-seeking discourse such as Reddit’s ELI5 or Change My View; adjacent communities like Astral Codex Ten; and discourse in technical communities like engineers or academics; etc.
- ^
We say “human” rationality, because we’re most interested in how we humans can perform best given how our brains work (as opposed to the general rationality that’d apply to AIs and aliens too).
- ^
The definition of “rationality” on LessWrong isn’t 100% universally agreed to, though this one is the most standard.
- ^
This is ideally what we’d call ourselves all the time, but since it’s a bit of a mouthful, people tend to just say rationalist without qualification. Nonetheless, we do not claim that we’ve definitely attained that much rationality. But we’re aiming to.
- ^
In fact, one ulterior motive of Eliezer Yudkowsky (the founder of LessWrong) in founding the site in 2009 was that rationality would help people think about AI. Back in 2009, it took more perception and willingness to discern the truth of weird ideas like AIs being powerful and dangerous in the nearish future.
- ^
As opposed to beliefs being for signaling group affiliation and having pleasant feelings.
- ^
In a 2014 comment, Eliezer described the Sequences as containing 60% standard positions, 25% ideas you could find elsewhere with some hard looking, and 15% original ideas. He says that the non-boring tone might have fooled people into thinking more is original than there is, but also that the curation of which ideas he included, and how they fit together into a single package, was itself a form of originality.
Each of the following bullet points begins with “who”, so this should probably be something like “LessWrong is a good place for people:”
Or “good place for those”.
This is much much better than the draft version. In particular, I no longer have the same impression from my draft feedback, that it read like “Here’s how you can audition for a spot in our prestigious club”.
So kudos for listening to feedback <3, and apologies for my exhausting style of ultra-detailed feedback.
Anyway, you made the mistake (?) of asking for more feedback, so I have more of it T_T. I’ve split it into three separate comments: typos, language, and substantial feedback.
Substantial feedback (incl. disagreements)
Excessive demands on first contributions by new users
“Don’t worry! You don’t have to know every idea ever discussed on LessWrong to get started, this is just a heads up on the kind of place this is.” → I’m confused who this kind of phrasing is addressed at, and wonder whether the current version would have the desired effect. After all, “Don’t worry” often means “Do worry”.
“Even if you found your way to LessWrong because of your interest in AI, it’s important for you to be aware of the site’s focus on rationality, as this shapes expectations we have of all users in their posting, commenting, etc.” → Once again I’m skeptical about these vague end-of-section paragraphs.
“How to ensure your first post or comment is well-received” → Once again I don’t like any sections which imply that you have to write a Bachelor’s Thesis before you can begin participating on the site.
I would reconsider the motivation for that section and cut it entirely, or substantially rewrite and shorten it, or just spin it off into a separate post.
Who is this section even written for? “A lot of the below is written for the people who aren’t putting in much effort at all, so we can at least say “hey, we did give you a heads up in multiple places”.” → That seems like a bad reason for something to be part of the New User’s Guide. Brevity is a virtue; here you’re displaying text to people in the expectation that those who should read it won’t, and those who don’t need to read it will.
Another indication for why this section seems dubious to me is that it once again ends on something like “Don’t worry about it too hard.”. If you don’t want new users to worry about something too hard, don’t put it into the New User’s Guide in the first place.
Re: the section “Initial user/content review” → See my comments on “How to ensure your first post or comment is well-received”.
Excessive reading material for new users
The “How to get started” section begins with “Because LessWrong is a pretty unusual place, it’s usually a good idea for users to have spent some time on the site before writing their own posts or getting deep into comment discussions – doing so ensures you’ll write something well received.” → This section is drowning new users in potential reading material, and I’m skeptical of that approach.
Also, part of the advice in “How to get started” boils down to “read the Sequences”, which is a ridiculously huge ask. That’s not “how to get started”, at best that’s “how to get much more involved”. (As an example, IIRC I read the Sequences back in 2013 as a university student, and reading them took me two full months during summer vacation.)
Suggestions for how to welcome new users instead
“Participate in welcome threads” → This kind of suggestion should be at the top of the “How to get started” section, not (to paraphrase) “read several million words”. That said, I don’t know to which extent questions in those threads are currently answered. But since the mods already take the time to review any comments by new users, I think responding to questions in these threads would be a comparatively good use of mod time, to the point that you could even make it an unspoken rule that in Welcome threads, all requests for further reading will be answered.
“The monthly general Open and Welcome thread … “all questions welcome” AI Open Threads” → These tag pages are currently sorted by Most Relevant for me, which is to say, Not Relevant At All. If the site infrastructure allows this, I’d suggest setting these two tags to default to sort by New whenever they’re linked without a preferred sorting method. If not, I suggest replacing all links to the open threads such that the sorting is part of the link. Like this: Open Threads (sorted by New) and AI Open Threads (sorted by New).
On the FAQ
The “Helpful Tips” section mentions that the FAQ is outdated. If the FAQ is outdated, either don’t link to it, or maybe actually update it?
Alternatively, consider turning the FAQ from a post into a tag page; then other LW power users could update it rather than just the LW team. (This seems like a good rule of thumb for all “living documents” related to LW, i.e. ones which are meant to be kept up-to-date; blogposts aren’t really the right format for documents which are meant to be continuously edited, whereas the tag pages are. Also, what if the FAQ essay is replaced by a new one in the future? Then you’d have to update all links to the old FAQ.)
In fact, I suspect that if you turned the FAQ into any format which the community can continuously edit, and then wrote a post à la “Request: Help Us Update our FAQ”, then I expect that this problem might just “solve itself”.
On the Length
Shorter is better. Approximately all LW posts, including my comments here, are way too long.
Here are some parts I think could be cut or spun off:
Footnote 2 (on why human rationality) seems superfluous. I don’t think this footnote pulls its weight in this intro.
The section “How to ensure your first post or comment is well-received”. See my “Substantial feedback” section for why I don’t like it.
Could the CFAR handbook or Tuning your cognitive strategies be put in the foundational reading section, alongside the Sequences and Codex and HPMOR?
Cognitive tuning isn’t very foundational, and possibly not even safe (although people worried about the safety seem to be mistaken). But if enough people try it, then it has significant potential to become its own entire field of successful human intelligence augmentation. AFAIK it offers a more nuanced approach to intelligence-augmenting habit formation than anything I’ve seen from any other source.
The CFAR handbook is good stuff that gets at important aspects of rationality, but I don’t think it counts either as something that core LessWrong userbase has mostly read, or is nearly as much the stuff that gets used regularly in conversations here. Among other things, the PDF of it wasn’t generally available until 2020, and a nicely formatted sequence until a year ago.
A bunch of the intro feels quite molochpilled to me, e.g. “stay true to our values” and the entire “systematized winning” framing that we still seem to bring up here (concerning in the sense of implying conflict games). Since the negative interpretations aren’t the intended ones, I suspect that we’re a low edit distance from avoiding the implication. Unfortunately, it’s late and I post this without any fixes in mind; just thought I’d express the viewpoint.
Sorry to have missed this while it was in draft form!
Can you clarify the molochy-ness?
short answer: apparently I’m not sure how to clarify it.
Before this change, which I feel fixes the main issue I was worried about:
it sounded to a large subset of my predictor of how my friends would react if I shared this to invite them to participate here, that I should predict that they would read it as “win at the zero sum game of life”. this still has some ambiguity in that direction; by not clearly implying that life isn’t zero sum, an implication that a certain kind of friend is worried anyone who thinks themselves smarter or more rational than others is likely to also believe, that sort of easily spooked friend will be turned away by this phrasing. I don’t say this to claim this friend is correct; I say this because I want to invite more of this sort of friend to participate here. I also recognize that accommodating the large number of easily spooked humans out there can be a chore, which is why I phrase the criticism by describing in detail how the critique is based on a prediction of those who won’t comment about it. Those who do believe life is zero sum, and those who spend their day angry at the previous group who believe life is zero sum, should, in my opinion, both be able to read this and get excited that this rational viewpoint has a shot at improving on their own viewpoint; the conflict between these types of friend should be visibly “third door”ed here. To do this needs a subtlety that I write out this long meta paragraph because I am actually not really sure how to manage; a subtlety that I am failing to encode. So I just want to write out a more detailed overview of my meta take and let it sit here. Perhaps this is because the post is already at the pareto frontier of what my level of intelligence and rationality can achieve, and this feedback is therefore nearly useless!
In other words: nothing actually specifically endorses moloch. But there’s a specific kind of vibe that is common around here, which I think a good intro should help onramp people into understanding, and which presently is an easier vibe to get started with for the type of friend who believes life is zero sum and would like to win against others.
Btw, I unvoted my starting comment, based on a hunch about how I’d like comments to be ordered here.
The question of whether truth-seeking (epistemic) rationality is actually the same as winning (instrumental) rationality has never been settled. In the interests of epistemic rationality, it might have been better to phrase this as “we are interested in seeking both truth and usefulness”.
Some of that changed from the last draft. I just made a change to clarify in the case of “winning” since that seemed easy.
Feedback on language, style, and phrasing
The table of contents at the top is currently not synced with the actual headings, and is missing most of the subheadings.
“My hope is that if LessWrong resonates with your values and interests, this guide will help you become a valued member of community. And if LessWrong isn’t the place for you, this guide will help you have a good “visit” or simply seek other pastures.” → Is the second sentence really necessary?
“We strive to maintain a culture that’s uncommon for web forums[1] and to stay true to our values.” → The “stay true to our values” part of the sentence seems rather empty because the values aren’t actually listed until a later section. How about “We strive to maintain a culture and values which are uncommon for web forums” or some such?
Re: “Our definition of rationality” in the section ‘What LessWrong is about: “Rationality”’: Instead of the current footnote, I’d prefer to see a brief disambiguation on what similar-sounding concepts LW-style rationality is not equivalent to, namely philosophical rationalism. And even most of the criticisms on the Wikipedia page on rationality don’t refer to the LW concept of rationality, but something different and much older.
“If you’re on board with our program and will help make our community more successful at its goals, then welcome!” → I know what you’re going for here, but this currently sounds like “if you’re not with us, you’re against us”, even though a hypothetical entirely passive lurker (who doesn’t interact with the site at all) would be completely fine. In any case, I think this section warrants a much weaker-sounding conclusion. After all, aren’t we fine with anyone who (to keep the metaphor) doesn’t burn or trash the garden?
“We treat beliefs as being about shaping your anticipations of what you’ll observe[6]” → I currently don’t understand the point of this sentence. Maybe something like “We consider the purpose of beliefs that they shape your anticipations of what you’ll observe[6]”? That still sounds weird. I’m genuinely not sure, and thus in any case recommend rewriting this sentence.
“LessWrong is also integrated with the Alignment Forum” → If you’re going to mention the Alignment Forum, then I suggest also explaining what it is in one short sentence.
A significant chunk of the section “Foundational reading” is a redundant repetition of the section “Philosophical Heritage: The Sequences”.
Throughout the essay, there are several instances of writing of the form “A/B/C”, and in all cases they would read better as an actual sentence with commas etc.
“The standard advice for how to vote is: upvote if you want to see more of something, downvote if you want to see less.” → Isn’t the actual advice to upvote if you want yourself and others to see more of something? Or phrased differently, “Upvote if you want LW to feature more of X”.
“Different users have different vote strengths based on how many upvotes/downvotes they’ve received.” → This phrasing seems needlessly roundabout. Long-term community members with higher karma have stronger votes, that’s it.
“we will soon be experimenting with automatic rate limits: users with very low or negative karma will be automatically restricted in how frequently they can post and comment. For example, someone who’s quickly posted several negative-karma posts will need to wait before being allowed to post the next one.” → This entire paragraph is no longer up-to-date.
Nitpicky language feedback
“Why a new user guide?” (first heading) → This might be clearer as “Why a guide for new users?”
“Our definition[3] of rationality is that a more rational reasoning process tends to arrive at true beliefs and good decisions more often than a less rational process.” → I know what you’re going for here, but as written this sounds like you’re presupposing your conclusion.
“If many of these apply to you, then LessWrong might be the place for you.” → “might be a good place for you”
Pretty much all bullet points after “Some of the features that set LessWrong apart:” look like full sentences and should therefore end on a period.
“Rather than treating belief as binary, we use probabilistic credences to express our certainty/uncertainty.” → would be shorter as “express our (un)certainty”
“examples here” → “You can find some examples here.”
“Between 2006 and 2009, Eliezer Yudkowsky spent two years writing a sequence of blog posts” → That sounds like a confusing contradiction, unless it’s a puzzle whose gotcha answer is “In 2007 and 2008″. Was the sequence written in 2 years or in 3–4 years?
“blog posts that shared his philosophy/beliefs/models about rationality” → “philosophy, beliefs, and models”
“The Concepts Page shows a very long list of topics on which LessWrong has posts. You can use that page to find posts that cover topics interesting to you, and see what the style is on LessWrong” → This reads a bit weirdly and could be rephrased.
The “Helpful Tips” section is unpolished, with inconsistent phrasing etc.
“Two-Axis System” → “The Two-Axis Voting System”
“It’s possible to want to see more of something (e.g. interesting arguments) even if you disagree with them, or to think an argument is weak even though it’s for a conclusion you agree with. LessWrong makes it possible to express to see more/less of something separately from whether you agree/disagree with it. (Currently only comments.) This means that upvotes and downvotes on the main axis can be used to express judgments of quality separate from agreement. But the same spirit applies to posts too.” → Suggested phrasing: “Sometimes you might want to see more of something (like interesting arguments), even if you disagree with it, or to think an argument is weak even though it’s for a conclusion you agree with. On LessWrong you can express your desire to see more (or less) of something separately from whether you (dis)agree with it. (Currently only comments.) So with this voting system, you can express judgments of quality separate from agreement.”
“That page that exists so people can double-check our decisions.” → “That page exists so users can hold the LW mods accountable for their moderation decisions.”
“If we don’t like your submission, we mark it as rejected” → Weird phrasing. How about: “If we reject your submission as not being a good fit for LW”
“When there’s stuff that seems to make the site worse, in order of severity, we’ll apply the following:” → “stuff” seems too vague.
Sections with weird phrasing
“As I wrote above, this document is so not being approved on your first submission doesn’t come as a surprise.” → Weird phrasing.
“hopefully this intro sets you up for good reading and good engagement with LessWrong!” → Weird phrasing.
“The LessWrong mod team like to be transparent about our moderation process.” → Weird phrasing.
“Back in 2009, it took more perception and willingness to discern the truth of weird ideas like AIs being powerful and dangerous in the nearish future.” → Weird phrasing.
Good opportunity to say “showing why they’re wrong” instead (without “you think”), to avoid connotation of “it’s just your opinion” rather than possibility of actually correct bug reports.
Edited!
It’s not clear from this or what immediately follows in this section whether you intend this statement as a tautological definition of a process (a process that “tends to arrive at true beliefs and good decisions more often” is what we call a “more rational reasoning process”) or as an empirically verifiable prediction about a yet-to-be-defined process (if you use a TBD “more rational reasoning process” then you will “tend[] to arrive at true beliefs and good decisions more often”). I could see people drawing either conclusion from what’s said in this section.
Good point. I’ve edited to make this clearer.
Since you’ve gone with the definition, are you sure that definition is solid? A reasoning process like “spend your waking moments deriving mathematical truths using rigorous methods; leave all practical matters to curated recipes and outside experts” may tend to arrive at true beliefs and good decisions more often than “attempt to wrestle as rationally as you can with all of the strange and uncertain reality you encounter, and learn to navigate toward worthy goals by pushing the limits of your competence in ways that seem most promising and prudent” but the latter seems to me a “more rational reasoning process.”
The conflation of rationality with utility-accumulation/winning also strikes me as questionable. These seem to me to be different things that sometimes cooperate but that might also be expected to go their separate ways on occasion. (This, unless you define winning/utility in terms of alignment with what is true, but a phrase like “sitting atop a pile of utility” doesn’t suggest that to me.)
If you thought you were a shoo-in to win the lottery, and in fact you do win, does that retrospectively convert your decision to buy a lottery ticket into a rational one in addition to being a fortunate one? (Your belief turned out to be true, your decision turned out to be good, you got a pile of utility and can call yourself a winner.)
A thing I should likely include is a note that the definition gets disputed, but that what I present is the most standard one.
Thanks to everyone who posted feedback on the draft of this.
Typo feedback:
“out of interest”
“is an online forum and community”
“more likely to lead to true beliefs” (a reasoning process doesn’t believe anything)
a) The original article is capitalized as “Rationality is Systematized Winning”
b) After this line in the essay, there’s an empty line inside the quote which can be removed.
For consistency, the dash here should be an em-dash: –
In the following list of bullet points, the grammar doesn’t work.
a) Currently they read as “LessWrong is a good place for who wants to work collaboratively” etc., so obviously a word like “someone” or “people” is missing. And the entire structure might work better if it was instead phrased as “LessWrong is a good place for people who...” or “LessWrong is a good place for you if you”, with each bullet point beginning with ”… <verb>”.
b) The sentences also currently mix up two ways of address, namely “someone who” and “you”. E.g. look at this sentence: “who likes acknowledging… to your reasoning”
I’m not entirely sure, but I think the “won’t” here might be a wrong negation. How about something like the following:
“We, the site moderators, don’t take for granted what makes our community special, and that preserving it will require intentional effort.”
“German”
“of the Internet”
“Rather than say that X is… that X happens.”
“conversations”
“These”
“wanted to have discussions”
“he’d described”
“started:”
Also, some of the bullet points immediately after this are in past tense for some reason.
“consisting of ~400 posts”
“consists of 50 top posts”
heads-up
“Forum.”
“well-received”
“are pretty long”
“and see what the style is on LessWrong.”
“here are a few places where”
I find the current phrasing a bit weird. Maybe “because we host discussions of it”?
″, even if you disagree with it”
All other bullet points here are phrased as full sentences with a period at the end.
All bullet points following this are missing periods at the end.
“because because” should probably be “because”
“won’t stay that way” should probably be “would stay that way”
I don’t know how to phrase the question but, basically, “what does that mean”?
Assume a new user comes to LW, reads the New User’s Guide to LessWrong first, and then starts browsing the latest posts and recommendations: they will quickly notice that, in practice, LW is mostly about AI, or at least that most posts are about AI, and this has been the case for a while already.
And that is despite the positive karma bias towards Rationality and World modeling by default, which I assume is an effort from you (the LW team) to make LW about rationality, and not about AI (I appreciate the effort).
So, the sentence “What LW is about: “Rationality””: is it meant to describe the website, in which case it seems like a fairly inaccurate description; or is it meant as a promise made to new users, that is, “we know that, right now, discussions are focused on AI, but we, the LW team, know that they will come back to rationality / are committed to making them come back to rationality”?
I don’t want to criticize the actions of the LW team. I understand that you are aware of this situation, and that there might not exist a better equilibrium between wanting LW to be about rationality, not wanting to shut down AI discussions because they have some value, and not wanting to prevent users from posting about anything (including AI) as long as some quality standards are met. Still, I am worried about the gap a new user would observe between the description of LW written here and what they will find on the site.
A few points.
This might be conflating “what this site is about” with “what is currently discussed”. The way I see it, LW is primarily its humungous and curated archives, and only secondarily or tertiarily its feed. The New User experience includes stuff like the Sequence Highlights, for example. If there’s too much AI content for someone’s taste (there certainly is for mine), then a simple solution is to a) focus on the enduring archives, rather than the ephemeral feed; and b) to further downweight the AI tag (-25 karma is nowhere near enough).
That said, it might be warranted for the LW team to adjust the default tag weights for new users, going forward.
Rationality is closely related to cognition and intelligence, so I don’t think it’s as far or distinct from AI as would be implied by your comment. AI features prominently in the original Sequences, for example.
You registered in 2020. Back then, a new user might have asked whether the site is supposed to be about rationality, or rather about Covid.
Good points
I’m not sure I share your view; I believe that new users care more about active discussions than about reading already-established content. I may very much be wrong here.
I agree with you
I think there are more posts about AI now than there were posts about Covid back then, but I see your point. There were indeed a lot of posts about Covid.
Thank you
You may be right regarding what new users care about (usually one registers on a site to comment on a discussion, for example), but the problem is that from that perspective, LW is definitely about AI, no matter what the New User’s Guide or the mods or the long-term users say. After all, AI-related news is the primary reason behind the increased influx of new users to LW, so those users are presumably here for AI content.
One way in which the guide and mod team try to counteract that impression is by showing new users curated stuff from the archives, but it might also be warranted to further deemphasize the feed.
I’m a new member here and curious about the site’s view on responding to really old threads. My first comment was on a post that turned out to be four years old. It was a post by Wei Dai and appeared at the top of the page today, so I assumed it was new. I found the content to be relevant, but I’d like to know if there is a shared notion of “don’t reply to posts that are more than X amount in the past.”
I love getting comments on old posts! (There would be less reason to write if all writing were doomed to be ephemera; the reverse-chronological format of blogs shouldn’t be a straitjacket or death sentence for ideas.)
Absolutely. I’ve just gotten a 30-day trial for Matt Yglesias’ SlowBoring substack, and figured I’d look through the archives… But then I immediately realized that Substack, just like reddit etc., practically doesn’t care about preserving, curating or resurfacing old content. Gwern has a point here on internet communities prioritizing content on different timescales by design, and in that context, LessWrong’s attempts to preserve old content are extremely rare.
I’m very confident that there is no norm of pushing people away from posting on old threads. I’m generally confident that most people appreciate comments on old posts. However, I think it is also true that comments on old posts are unlikely to be seen, voted on, or responded to.
I agree; if anything, there is a norm in the opposite direction. I also agree with the observation that such comments are often (sadly) ignored.
It’s totally normal to comment on old posts. We deliberately design the forum to make it easier to do and for people to see that you have.
(actually your comment here makes me realize we should probably somehow indicate when there are new comments on the top-of-the-page spotlight post, so people can more easily see and continue the convo)
GreaterWrong shows new comments regardless.
So does LessWrong, but they quickly disappear (because there’s a high volume of comments). GreaterWrong doesn’t have Spotlight Items so the point is a bit moot, but the idea here is that everyone is nudged more to see new comments on the current Spotlight Item on LessWrong.
(i.e. the spotlight item shown at the top of the page)
Ironic typo: the link includes the preceding space.
He usually describes himself as a decision theorist if asked for a description of his job.
Some typos:
Seems like some duplicated words here.
Perhaps: “weird ideas like AIs being powerful and dangerous”
The double negative here distorts the meaning of this sentence.
Thanks @David Gross for the many suggestions and fixes! Much appreciated. Clearly should have gotten this more carefully proofread before posting.
All the typo comments are great, but the resolved typos are mixed in with open feedback. Is it possible to hide those or bundle them together, somehow, so they don’t clutter the comments here?
I agree it’s not great, though I don’t have any easy/quick solution for it.
I also frequently make typo comments, and this problem is why I’ve begun neutral-voting my own typo comments, so they start on 0 karma. If others upvote them, the problem is that the upvote is meant to say “thanks for reporting this problem”, but it also means “I think more people should see this”. And once the typo is fixed, the comment is suddenly pointless, but still being promoted to others to see.
Alternatively, I think a site norm would be good where post authors are allowed and encouraged to just delete resolved typo comments and threads. I don’t know, however, if that would also delete the karma points the user has gained via reporting the typos. And it might feel discouraging for the typo reporters, knowing that their contribution is suddenly “erased” as if it had never happened.
A technical alternative would be an archival feature, where you or a post author can mark a comment as archived to indicate that it’s no longer relevant. Once archived, a comment is either moved to some separate comments tab, or auto-collapsed and sorted below all other comments, or something.
The concepts page link in the “Exploring your interests” section seems wrong.
This is grammatically ambiguous. The “encouraged” shows up out of nowhere without much indication of who is doing the encouraging or what they are encouraging. (“Although [something is] encouraged [to someone by someone], you don’t have to read this...”)
Maybe “I encourage you to read this before getting started on LessWrong, but you do not have to!” or “You don’t have to read this before you get started on LessWrong, but I encourage you to do so!”
redundant “who”s in bullets
Thanks! Fixed
I realized something important about psychology that is not yet publicly available, or that is very little known compared to its importance (60%). I don’t want to publish this as a regular post, because it may greatly help in the development of AGI (40% that it helps and 15% that it greatly helps), and I would like to help only those who are trying to create an aligned AGI. What should I do?
I’d ask in the Open Thread rather than here. I don’t know of a canonical answer, but it would be good if someone wrote one.
ok thanks
What exactly do users lose and receive karma for?
Karma is just the sum of votes from other users on your posts, comments and wiki-edit contributions.
Hey, I wonder what’s your policy on linking blog posts? I have some texts that might be interesting to this community, but I don’t really feel like copying everything from HTML here and duplicating the content. At the same time I know that some communities don’t like people promoting their content. What are the best practices here?
Typo: “If you arrived here out of interested in AI” instead of “If you arrived here out of interest in AI”.