Welcome to LessWrong!
The road to wisdom? Well, it’s plain
and simple to express:
Err
and err
and err again
but less
and less
and less.
– Piet Hein
LessWrong is an online forum and community dedicated to improving human reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. Each day, we aim to be less wrong about the world than the day before.
See also our New User’s Guide.
Training Rationality
Rationality has a number of definitions[1] on LessWrong, but perhaps the most canonical is this: the more rational you are, the more likely your reasoning is to lead you to accurate beliefs and, by extension, to decisions that most effectively advance your goals.
LessWrong contains a lot of content on this topic: how minds work (human, artificial, and theoretically ideal), how to reason better, and how to have productive discussions. We’re very big fans of Bayes’ theorem and other theories of normatively correct reasoning[2].
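As a minimal illustration of the kind of update Bayes’ theorem describes (a sketch, not part of the original text; the function name and numbers are hypothetical), here is a posterior calculation in Python:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) = P(E|H) * P(H) + P(E|not H) * P(not H).

def posterior(prior: float, likelihood: float, false_positive_rate: float) -> float:
    """Return P(H|E) given P(H), P(E|H), and P(E|not H)."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Example: evidence that is 90% likely given the hypothesis, 5% likely
# otherwise, applied to a hypothesis with a 1% prior.
p = posterior(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
print(round(p, 3))  # ~0.154
```

Note that even strong evidence only raises a 1% prior to about 15%: the low base rate dominates, which is exactly the kind of intuition-correcting result Bayesian reasoning is prized for.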
To get started improving your Rationality, we recommend reading the background-knowledge text of LessWrong, Rationality: A-Z (aka “The Sequences”) or at least selected highlights from it. After that, looking through the Rationality section of the Concepts Portal is a good thing to do.
Applying Rationality
You might value rationality for its own sake; however, many people want to be better reasoners so they can hold more accurate beliefs about topics they care about and make better decisions.
Using LessWrong-style reasoning, contributors have written essays on an immense variety of topics, each time approaching the topic with a desire to know what’s actually true (not just what’s convenient or pleasant to believe), being deliberate about processing the evidence, and avoiding common pitfalls of human reasoning.
Check out the Concepts Portal to find essays on topics such as artificial intelligence, history, philosophy of science, language, psychology, biology, morality, culture, self-care, economics, game theory, productivity, art, nutrition, relationships and hundreds of other topics broad and narrow.
LessWrong and Artificial Intelligence
For several reasons, LessWrong is a website and community with a strong interest in AI, and specifically in ensuring that powerful AI systems are safe and beneficial.
AI is a field concerned with how minds and intelligence work, overlapping a lot with rationality.
Historically, LessWrong was seeded by the writings of Eliezer Yudkowsky, an artificial intelligence researcher.
Many members of the LessWrong community are heavily motivated by trying to improve the world as much as possible, and these people were convinced many years ago that AI was a very big deal for the future of humanity. Since then LessWrong has hosted a lot of discussion of AI Alignment/AI Safety, and that’s only accelerated recently with further AI capabilities developments.
LessWrong is also integrated with the Alignment Forum.
The LessWrong team who maintain and develop the site are predominantly motivated by trying to ensure that outcomes from powerful AI are good.
If you want to see more or less AI content, you can adjust your Frontpage Tag Filters according to taste[3].
Getting Started on LessWrong
The New User’s Guide is a great place to start.
The core background text of LessWrong is the collection of essays, Rationality: A-Z (aka “The Sequences”). Reading these will help you understand the mindset and philosophy that defines the site. Those looking for a quick introduction can start with The Sequences Highlights.
Other top writings include The Codex (writings by Scott Alexander) and Harry Potter & The Methods of Rationality. Also see the Library Page for many curated collections of posts and the Concepts Portal.
Also, feel free to introduce yourself in the monthly open and welcome thread!
Lastly, we do recommend that new contributors (posters or commenters) take time to familiarize themselves with the site’s norms and culture to maximize the chances that their contributions are well-received.
Thanks for your interest!
- The LW Team
Related Pages
LessWrong FAQ
A Brief History of LessWrong
Team
LessWrong Concepts
[1] Definitions of Rationality as used on LessWrong include:
- Rationality is thinking in ways that systematically arrive at truth.
- Rationality is thinking in ways that cause you to systematically achieve your goals.
- Rationality is trying to do better on purpose.
- Rationality is reasoning well even in the face of massive uncertainty.
- Rationality is making good decisions even when it’s hard.
- Rationality is being self-aware, understanding how your own mind works, and applying this knowledge to thinking better.
[2] There are in fact laws of thought no less ironclad than the laws of physics [source].
[3] Hover over the tags to adjust their weighting in your Latest Posts feed.