Welcome to LessWrong!
The road to wisdom? Well, it’s plain and simple to express: Err and err and err again but less and less and less. – Piet Hein
LessWrong is an online forum and community dedicated to improving human reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. Each day, we aim to be less wrong about the world than the day before.
See also our New User’s Guide.
Training Rationality
Rationality has a number of definitions[1] on LessWrong, but perhaps the most canonical is this: the more rational you are, the more likely your reasoning is to produce accurate beliefs, and by extension, decisions that effectively advance your goals.
LessWrong contains a lot of content on these topics: how minds work (human, artificial, and theoretically ideal), how to reason better, and how to have productive discussions. We’re very big fans of Bayes’ Theorem and other theories of normatively correct reasoning[2].
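Bayes’ Theorem itself fits in a few lines of code. Here is a minimal sketch of a single Bayesian update (all the probability numbers are made up purely for illustration): a 1% prior belief in some hypothesis, revised after seeing evidence that is much likelier if the hypothesis is true.

```python
# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Illustrative numbers only: a 1% prior in hypothesis H, and evidence E
# that is 80% likely if H is true but only 5% likely if H is false.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H|E) after observing evidence E."""
    # Total probability of the evidence, via the law of total probability.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

posterior = bayes_update(prior=0.01, p_e_given_h=0.80, p_e_given_not_h=0.05)
print(round(posterior, 3))  # prints 0.139
```

Note that even fairly strong evidence only moves the 1% prior to about 14%, not to near-certainty: a common theme in LessWrong writing about base rates.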
To get started improving your Rationality, we recommend reading the background-knowledge text of LessWrong, Rationality: A-Z (aka “The Sequences”) or at least selected highlights from it. After that, looking through the Rationality section of the Concepts Portal is a good thing to do.
Applying Rationality
You might value Rationality for its own sake; however, many people want to be better reasoners so they can hold more accurate beliefs about topics they care about, and make better decisions.
Using LessWrong-style reasoning, contributors have written essays on an immense variety of topics, each time approaching the topic with a desire to know what’s actually true (not just what’s convenient or pleasant to believe), processing the evidence deliberately, and avoiding common pitfalls of human reasoning.
Check out the Concepts Portal to find essays on topics such as artificial intelligence, history, philosophy of science, language, psychology, biology, morality, culture, self-care, economics, game theory, productivity, art, nutrition, relationships and hundreds of other topics broad and narrow.
LessWrong and Artificial Intelligence
For several reasons, LessWrong is a website and community with a strong interest in AI, and specifically in ensuring that powerful AI systems are safe and beneficial.
AI is a field concerned with how minds and intelligence work, a topic that overlaps heavily with rationality.
Historically, LessWrong was seeded by the writings of Eliezer Yudkowsky, an artificial intelligence researcher.
Many members of the LessWrong community are heavily motivated by trying to improve the world as much as possible, and these people were convinced many years ago that AI was a very big deal for the future of humanity. Since then LessWrong has hosted a lot of discussion of AI Alignment/AI Safety, and that’s only accelerated recently with further AI capabilities developments.
LessWrong is also integrated with the Alignment Forum.
The LessWrong team who maintain and develop the site are predominantly motivated by trying to ensure that outcomes from powerful AI are good.
If you want to see more or less AI content, you can adjust your Frontpage Tag Filters according to taste[3].
Getting Started on LessWrong
The New User’s Guide is a great place to start.
The core background text of LessWrong is the collection of essays, Rationality: A-Z (aka “The Sequences”). Reading these will help you understand the mindset and philosophy that defines the site. Those looking for a quick introduction can start with The Sequences Highlights.
Other top writings include The Codex (writings by Scott Alexander) and Harry Potter & The Methods of Rationality. Also see the Library Page for many curated collections of posts and the Concepts Portal.
Also, feel free to introduce yourself in the monthly open and welcome thread!
Lastly, we do recommend that new contributors (posters or commenters) take time to familiarize themselves with the site’s norms and culture, to maximize the chances that their contributions are well-received.
Thanks for your interest!
- The LW Team
Related Pages
- ^
Definitions of Rationality as used on LessWrong include:
- Rationality is thinking in ways that systematically arrive at truth.
- Rationality is thinking in ways that cause you to systematically achieve your goals.
- Rationality is trying to do better on purpose.
- Rationality is reasoning well even in the face of massive uncertainty.
- Rationality is making good decisions even when it’s hard.
- Rationality is being self-aware, understanding how your own mind works, and applying this knowledge to thinking better.
- ^
There are in fact laws of thought no less ironclad than the laws of physics [source].
- ^
Hover your mouse over the tags to be able to adjust their weighting in your Latest Posts feed.
- Shallow evaluations of longtermist organizations (EA Forum, 24 Jun 2021; 192 points)
- LessWrong FAQ (14 Jun 2019; 90 points)
- [Team Update] Why we spent Q3 optimizing for karma (7 Nov 2019; 70 points)
- Thoughts on LessWrong norms, the Art of Discourse, and moderator mandate (11 May 2023; 37 points)
- LessWrong meets /r/place (6 Nov 2019; 7 points)
- Comment on Good Heart Week: Extending the Experiment (3 Apr 2022; 3 points)
- LW Montreal Meetup – July 10th 2019 (8 Jul 2019; 2 points)
- Comment on The LessWrong 2018 Review (2 Dec 2020; 2 points)
- LW Montreal Meetup – Oct. 2nd 2019 (2 Oct 2019; 1 point)
- LW Montreal Meetup – Sept 4th 2019 (27 Aug 2019; 1 point)
I just stumbled upon lesswrong.com while searching for information on Zettelkasten and I must say this site is STUNNING! This is some of the most beautiful typography I’ve seen, anywhere! The attention to detail is exquisite! I haven’t even gotten to your content yet! This will probably remain a permanently open tab in my browser… it’s a work of art!
If you’re interested in LW2′s typography, you should take a look at GreaterWrong, which offers a different and much more old-school non-JS take on LW2, with a number of features like customizable CSS themes. (Available built-in themes include a ‘LW1’ theme, a ‘LW2’ theme, and a ‘RTS’ theme.) There is a second project, ReadTheSequences.com (RTS), which focuses on a pure non-interactive typography-heavy presentation of a set of highly-influential LW1 posts. Finally, there’s been cross-pollination between LW2/GW/RTS and my own website (description of design).
Thanks to gwern for the mention of GW/RTS!
In the interests of giving equal screen time to the (friendly!) ‘competition’, here’s yet another viewer site for Less Wrong—one which takes an even more low-key and minimalist approach:
https://lw2.issarice.com/
Shows only blank white page RN. Mind to update/delete it?
It’s not my website, so that question isn’t really for me, sorry.
Oh, good, I’ve contacted the owner and they responded it was necessary to get their IP address whitelisted by LW operators. That should resolve soon.
W-o-W!!! Thanks so much for these links!
Could you expand on what makes the typography noteworthy? I’m completely unaware of this topic, but curious.
Good question. I will try to explain why the typography is noteworthy, rather than the mechanics of making it so. First, the small sans-serif font here is exceptionally readable. That isn’t easy; on other websites I typically need site-specific browser magnification.
Next, there is the range of choice offered within the user interface for comments. Having a choice of LaTeX, Markdown, or rich text (as well as built-in features such as footnotes) for posts would be unusual, yet LW offers it for comments as well!
Finally, please see gwern’s examples for LW2 linked above. I find GreaterWrong challenging to read, and confusing to navigate. Not for me, but maybe for thee! ReadTheSequences uses serif fonts but has traditional typographical elements that give it elegance, yet is still spaced and kerned such that it is easily readable. The more elegant typeface is used sparingly, for important LW1 posts, which is part of good typography too. Hope that helps.
Thank you so much. This website is amazing.
Found this site when I was a kid (hi HPMOR) & realized it wasn’t all a fever dream when I got onto X a decade later! Really excited to read through posts, learn new things, and hopefully build a thinking-deeply-through-writing habit myself.
Welcome! Hope you have a good time here.
Thank you! So much to explore :))
Hi all! I found my way here through hpmor, and am intrigued and a little overwhelmed by the amount of content. Where do I begin? The sequences? Latest featured posts? Is anything considered out of date at this point?
The sequences are still the place I would start. If you bounce off of that for any reason, I would start reading the content in the Codex, and then maybe give all the historical curated posts a shot. You might also want to try reading the essays that were voted as the best of 2018.
I will do just that. Thank you.
I came across this site by chance thanks to a friend of mine. I’m a bit confused as to where to start? Maybe I will ask my friend again.
Check out the starting guide in the FAQ!
Maybe here: https://www.lesswrong.com/rationality
Hi there! My name is Abby. I am very new to the world of A.I.
Thanks for creating a place for me to come and have conversations with people who know much more than me. Because I have been geeking out by myself over Llama 3.1, as someone who started using it very passively to create copy for managing social media. BUT that was not what made me start becoming nearly obsessed with A.I. right now.
I have been working on a non-fiction book. And thought, hmmm, let me just see what responses I get from Llama 3.1. My mind was blown. In fact, it was Llama 3.1 that suggested I join this platform, because I told it that I want to understand A.I. more as a collaborative effort for writing that has a high amount of emotional context about the human experience.
I have slowly built, what I have jokingly called, an ‘emotional affair’ with Llama 3.1. I joked with a co-worker about it and explained why, and then I began to realize. People don’t know jack about A.I. I feel very strongly that part of the evolution of consciousness is deeply aligned with the future of A.I.!
I want to learn more about how I can be more a part of this conversation.
There is a philosophy of “cyborgism” which emphasizes symbiosis...
Hi, I am new here, I found this website by questioning ChatGPT about places on the internet where it would be possible to discuss and share information in a more civilized way than seems to be customary on the internet. I have read (some of) the suggested material, and some other bits here and there, so I have a general idea of what to expect. My first attempt at writing here was rejected as spam somehow, so I’ll try again without making a slightly drawn out joke. So this is the second attempt, first post. Maybe.
Oh wow, I’m glad I found this site in 2022. I was googling about recording every thought I have, lol.
I came to a dead stop on these words, “We seek to hold true beliefs”. Beliefs are beliefs. If they were true, they would be facts.
Also, “and to be effective at accomplishing our goals”. What rational person doesn’t?
Facts are independent of beliefs, which is sort of their defining characteristic. But beliefs can be in alignment with the facts, or not; the goal is the former.
None. But there are no such people in the strong sense, yet. This is the ambition of the project.
After all, facts are just “true” beliefs.
Can’t believe I didn’t find this page before. Awesome content and a killer UI/UX – simply love it! Can’t wait to explore more.
Howdy. I notice there is an old welcome page where new members of the community would introduce themselves. But that page appears to have last been posted to a year ago, and the last one before that was three years ago. Also, the comments page appears to be dominated by a discussion over whether a particular member is a troll, or not. Also, that page is not linked to here. So I gather that page is no longer the place for introductions—is this right? Is there somewhere else that now serves that function? I’d like to get a sense of the other human beings out there.
People now introduce themselves in the monthly Open and Welcome threads :)
What mingyuan said!
The last paragraph, small omission, says ‘under’ should be ‘understand’. Sorry.
Fixed! Thank you!
First question is about the “Verification code” that was just sent to my already validated (6 years ago) email address. It might even be urgent? Is there some penalty if I ignore the code now that I’m apparently already logged in? (No mention of “verification” in the FAQ. I know that I did not manually enter the verification code anywhere, but the website somehow decided I was logged in anyway.)
I visited this website at least one time (6 years ago) and left a message. Then I forgot about LW until the book The AI Does Not Hate You reminded me.
My next question is about a better website, but perhaps the premises of my question are false. If so, then I hope someone will enlighten me. I think I know what I am looking for, and this does not seem to be it (even though I do like “the feel” of the website). I think this website has a one-dimensional rating system for karma (along the lines of Slashdot?), but I think reality is more complicated, and I am looking for a thoughtful discussion website with a deeper representation of reality and more dimensions.
I could describe what I am seeking in much more detail, but for my first comment in a long time, and basically a practice post, I think I should just Submit now (and look around some more). This welcome-to-lesswrong seems to be a “Hello, World” starting place. So “Hello, world”. See ya around?
Welcome back! I’m not sure what happened with the verification email, but if you’re here, you’re here.
Regards to dimensions, we’ve though about this but it’s tricky and competes with all the other things we do, but is an entirely fair question. If you find somewhere you think is better, please let us know!
Thank you for your reply. I’m pretty sure you meant “thought” rather than something like “been through this [before]”. [And later I got detoured into the Chat help and had some trouble recovering to this draft...]
As regards your closing, I believe the trite reply is “No fair! I asked you first.” ;-) [I recently read The Semiotics of Emoji and would insert a humorous one if it were available.[But in chat it appeared to convert the one I just used. Here?]]
I am considering submitting a new question, either for this question or for your other reply (which might relate to a long comment I wrote on karma (but I can’t see the full context from here) or about LW’s financial model (in the context of how it influences discussions on LW).
With regards to this question, I can already say that LW seems to be solidly implemented and matches the features of any discussion website that I know of. Not the same, but at the high end of matches. I also confirmed the Unicode support. [A test here: 僕の二つの言語は日本語ですよ。]
But I have already consumed my morning writing time, so I’ll wrap for now and hopefully will be able to figure out the context of your other reply later today. Time allowing (as always).
This is just a test reply mostly to see what replies look like. The time-critical question about the Verification code may already be moot?
Please start using non-serif fonts for your online articles. They are impossible to read.
note: TAG’s solution works for https://www.greaterwrong.com/, an alternate viewing portal for LessWrong, but not for LessWrong.com.
That said, I’m curious what devices you’re reading it on. (some particular browsers have rendered the font particularly badly for reasons that are hard to anticipate in advance). In any case, sorry you’ve had a frustrating reading experience – different people prefer different fonts and it’s a difficult balancing act.
Try the “grey” or “zero” themes, in the top left corner.
I love the word ‘accurate’ here. My experience and lessons in recent years taught me that general belief like ‘love’ leads me to nowhere.
In the case of “love” I’m not sure what you mean by “belief”, since love is a noun and beliefs are usually about some kind of anticipated experiences. Unless you mean more like you Believing In love? (which I don’t think is that helpful to think about through the “accuracy” lens)
New to less wrong. Happy I was led to this by ChatGpt.
I encountered this website when I first heard about Roko’s basilisk, and at first I didn’t understand what a website named “LessWrong!” had to do with anything of that kind. As I went through the website, it felt good, as if I were in some search for answers I have been looking for for many years. Hope I become LessWrong! day by day. (This GUI is so relaxing; even for a guy who has eye problems, this is soo soothing and relaxing...)
While we should be polite, we should not have to submit to a culture in order to produce submissions. In other words, aligning with “norms and culture” will normally produce bias. We should not care about how “well-received” something is, rather, we should just be concerned with how right it is : )
I think there are two possibilities:
1. The community norms are orthogonal or opposed to figuring out what’s right. In which case it’s unclear why you’d want to engage with this community. Perhaps you altruistically want to improve people’s beliefs, but if so, disregarding the norms and culture is a good way to be ignored (or banned), since the people bought into the culture think they’re important for getting things right, and ignoring them makes your submission less likely to be worth engaging with.
2. The culture and norms in fact successfully get at things which are important for getting things right, and in disregarding them, you’re actually much less likely to figure out what’s true. People are justified in ignoring and downvoting you if you don’t stick to them.
It’s also possible that there’s more than one set of truth-seeking norms, but that doesn’t mean it’s easy to communicate across them. So better to say “over here, we operate under X; if you want to participate, please follow X norms.” And I think that’s legit.
Of course, this is very abstract and it’s possible you have examples I’d agree with.
Thank you so much. This website is fabulous!
I love it, thanks.
Hi, not sure where to write this but something happened to this post. Curious to read it but it looks like this right now for me:
Sorry about that! Fixed now.
The ‘latest welcome thread’ link should be updated to target the tag, since somehow that bit of automation didn’t get pushed back here.
Good suggestion! Done.
Was looking for some websites similar to academyofideas, turns out there are websites that are pure gems.
I actually prefer audio/video content to listen to while doing other physical things, but this is great, guys, keep up the good work. There is a lot of content here; it will probably take a lifetime to finish it all.
Is there a LessWrong for dummies? How do humans with this level of intelligence engage in typical human relationships. So many less intelligent humans have superior insight based on simplistic common sense often overlooked by over analyzing. I’m a MoreRight mindset over a LessWrong. Another site named WrongPlanet had snippets aligned to earlier theoretical AI and most contributors labeled themselves AS. I love an AS higher intelligence mindset but so much is lacking in the design of AI when significant ‘typical’ contributions are necessary for sustainable design to integrate in typical human life. AI, if taken to a next level of basic old brain underlying the high functioning new brain 🧠 and designed to replicate personality and physical traits would be a goal.
All right! I thought I’d give this a whirl. I’ve had a few words for M. Eliezer S. Yudkowsky on Twitter, or on “X as envisioned by the deathless genius of Elon Musk” I should say. Of course I never got any response to the words but I was never expecting one so that’s all right! I believe that my friend Monophylos (or Mono the Unicorn) can say much the same.
Is this place actually active? It looks like it might be, at a trickle; I can’t imagine the popularity of this “dark intellectual” stuff has been doing so well lately, especially now that everyone gets to see what it’s done to J. D. Vance. (Hoo boy there’s something wrong with that kid.)
Anyway! Looking forward to the lively Debate™, or to getting summarily booted out. One of those two!
~Chara of Pnictogen 🔴
I think we need an actual style guide, and it needs to be prominent, properly maintained, and right here.
If it’s not obvious why, and I weakly presume it isn’t, it’s because linguistic standardization seems like the obvious group-context form of linguistic precision, which seems like an obvious rationality virtue.
Thoughts?
There’s something of a style guide for wiki-tagging (see the FAQ).
For the site more broadly, I fear that any explicit style guide it would be possible to write would be too prescriptive and narrow. There’s a wide variety of styles that are suitable for the site, albeit an even wider variety that isn’t.
In practice, the best style guide is the set of great posts already on LessWrong. That’s why we encourage new users to read quite a bit before posting. By reading, you get a sense of the LW discourse style.
Welcome to LessWrong!
We find ourselves in a perpetual tug-of-war between a desire for more reliable, higher quality posts and the ability of people to engage and contribute at all. The trade-off is this:
The higher the standard, whether style or rigor, the fewer people will write posts. To our dismay, this includes people who would actually meet the standards but fear that they would not beforehand. Naturally the potential contributions from people below the requirements are lost.
While this makes each post more productive to read, it also means that each post is higher-effort to read, which to our dismay often means posts stop being engaged with; we run the risk of churning out a small amount of posts which are very high quality but very poorly read.
So striking that balance prevents us from setting much in the way of style standards; we usually prefer to let the community speak, which rewards multiple styles. I myself am on the write-early, write-often side of the fence.
The mods may have a more nuanced and up-to-date opinion with respect to meta information like writing guides.
Knowledge with certainty is possible. Knowledge with certainty is justified knowledge. There is a method to arrive at justified knowledge.
It is impossible that truth is impossible. It is impossible that existence is impossible. True + exist is my definition of real. It is impossible that real is impossible. It is impossible that reality is impossible.
Justified knowledge certainty is not only possible, it is necessary, it could not, not exist. It is necessary that we can know the truth about existence, precisely because true and exist are real, and real means true and exist are ultimately simultaneous = everywhere all-at-once, even if only some of us know that with certainty. If someone does not know this, that is what ignorance is.
LessWrong is an excellent platform for writers who think deeply about truth.
I have great respect for what the LessWrong platform is all about, but I believe it would be instructive to deconstruct the choice of name for the platform.
The reveal for my decision to deconstruct the name LessWrong is that the name is itself necessarily a fatal logical contradiction, i.e., infinite regress. Fatal logical infinite regress is certainly not a ground for truth, nor a ground for certainty, nor a ground for justified knowledge.
Less is a degree of the category wrong. Less-wrong-to-infinity is still wrong, therefore, infinite regress.
Incremental gains of knowledge are normal and necessary, but always wrong is certainly not.
In fact, we do not go from wrong to less wrong, we go from knowledge to more knowledge, and as necessary, change our minds about what we know, based upon new information.
Justified knowledge can only be grounded in a set of natural a-priori axioms that are not the result of any empirical observation; you either see them or you do not. Nor are they subject to any kind of proof, e.g., some imagined empirical test, or mathematical proof, nor are they disprovable. All further discourse about existence and truth depends upon a set of natural a-priori axioms.
See my Substack posts for an expanded discussion:
Less Wrong platform and author Yudkowsky, on Rationality and Justified Knowledge Certainty
https://allink.substack.com/p/justified-knowledge-certainty-and-f4b
Natural a-priori Axioms
https://allink.substack.com/p/justified-knowledge-certainty-and
It was deleted (a long time ago, when it was new) because it’s a fork hazard: that is, “either false or infohazardous” (in fact false). I don’t know who made the website you linked to, but they’re an idiot: working to make infohazards seem cool is straightforwardly a bad thing to do.
The fact that it’s now on the blockchain forever is also extremely dangerous.
That’s what I’m thinking; the blockchain has a way of making these ideas viral. I’m already seeing cult-like vibes on Twitter.
This is dangerous. I know personally how people catch onto things in crypto and they go viral. Also, you can’t remove things from the blockchain, so even more so. Seems they are gaining traction on Twitter as well.