Rationality is the art of thinking in ways that result in accurate beliefs and good decisions. It is the primary topic of LessWrong.
Rationality is not only about avoiding the vices of self-deception and obfuscation (the failure to communicate clearly), but also about the virtues of curiosity, of seeing the world more clearly than before, and of achieving things previously out of your reach. The study of rationality on LessWrong includes both a theoretical understanding of ideal cognitive algorithms and the building of a practice that uses these idealized algorithms to inform heuristics, habits, and techniques for successfully reasoning and making decisions in the real world.
Topics covered in rationality include (but are not limited to): normative and theoretical explorations of ideal reasoning; the capabilities and limitations of our brain, mind, and psychology; applied advice such as introspection techniques and how to achieve truth collaboratively; and practical techniques and methodologies for figuring out what’s true, ranging from rough quantitative modeling to full research guides.
Note that content about how the world is can be found under World Modeling, and practical advice about how to change the world is categorized under World Optimization or Practical.
This list is not comprehensive! The tagging system is new. Many needed tags have not been created and/or added to the above list.
What we’re calling “rationality”
A good heuristic is that rationality is about cognitive algorithms. Rather than being a synonym for true or optimal, the term rational should be reserved for describing whether or not a cognitive algorithm results in true beliefs and optimal actions.
This is distinct from practical advice, such as how to improve relationships or implement productivity systems, which should not be considered “rationality” per se. Some have pushed back against labeling self-help as “rational dating”, etc., for reasons along these lines [1, 2], and they are probably correct.
In accordance with this, LessWrong classifies most self-help type advice under the World Optimization tag and not the Rationality tag.
Similarly, most object-level material about how the world is (e.g. math, biology, history) is tagged under World Modeling, with exceptions such as neuroscience and probability theory, which have concrete consequences for how one ought to think.
Heuristics and Biases
Early material on LessWrong frequently describes rationality with reference to heuristics and biases [1, 2]. Indeed, LessWrong grew out of the blog Overcoming Bias, and even Rationality: A-Z opens with a discussion of biases [1], with the opening chapter titled Predictably Wrong. The idea is that the human mind has been shown to systematically make certain errors of reasoning, such as confirmation bias. Rationality then consists of overcoming these biases.
Apart from the replication crisis, which discredited many examples of bias commonly referenced on LessWrong (e.g. priming), the “overcoming biases” frame of rationality is too limited. Rationality requires developing many positive skills, not just removing negative biases to reveal underlying perfect reasoning: skills such as how much to update in response to evidence, how to resolve disagreements with others, how to introspect, and many more.
Instrumental vs Epistemic Rationality
Classically, on LessWrong, a distinction has been made between instrumental rationality and epistemic rationality. However, these terms can be misleading: it’s not as though epistemic rationality can genuinely be traded off for gains in instrumental rationality. Such trades are only apparent, and thinking one should make them is a trap.
Instrumental rationality is defined as being concerned with achieving goals. More specifically, instrumental rationality is the art of choosing and implementing actions that steer the future toward outcomes ranked higher in one’s preferences. Said preferences are not limited to ‘selfish’ preferences or unshared values; they include anything one cares about.
Epistemic rationality is defined as the part of rationality which involves achieving accurate beliefs about the world. It involves updating on receiving new evidence, mitigating cognitive biases, and examining why you believe what you believe. It can be seen as a form of instrumental rationality in which knowledge and truth are goals in themselves, whereas in other forms of instrumental rationality, knowledge and truth are only potential aids to achieving goals. Someone practicing instrumental rationality might even find falsehood useful.
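Updating on new evidence can be made precise with Bayes’ theorem: the posterior probability of a hypothesis is proportional to its prior probability times the likelihood of the evidence under it. As a minimal sketch (the disease-test scenario and all numbers here are invented for illustration, not taken from the page):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(H | E) given P(H), P(E | H), and P(E | not H)."""
    # Total probability of the evidence across both hypotheses.
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# A medical test comes back positive. Prior P(disease) = 0.01,
# P(positive | disease) = 0.9, P(positive | healthy) = 0.05.
posterior = bayes_update(0.01, 0.9, 0.05)
# ≈ 0.154: the positive result raises the probability from 1% to about 15%,
# not to 90% -- the "correct amount" to update is often counterintuitive.
```

The point of the worked example is that the right size of an update is a computable quantity, not a matter of temperament.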
The Art and Science of Rationality
In a field like biology, we can draw a distinction between the science of biology, which involves various theories and empirical data about biological life, and the art of being a biologist: the specific way a biologist thinks, plays with ideas, and interacts with the world around them. Similarly, rationality is both a science and an art. There is the study of the iron-clad laws of reasoning and the mechanics of the human mind, but there is also the general training to be the kind of person who reasons well.
Rationalist
The term rationalist as a description of people is used in a couple of ways. It can refer to someone who endeavors to think better and implement as much rationality as they can. Many prefer the term aspiring rationalist to convey that the identifier is a claim to the goal of being more rational rather than a claim of having attained it already.
Perhaps more commonly, rationalist is used culturally to refer to someone associated with the various rationalist communities, separately from their efforts to improve their rationality.
So here’s a patch that broke formatting, and there is no apparent revert action in the UI. How are such things supposed to be addressed? (A preview would help prevent similar mistakes, but that also doesn’t seem to be available in the UI.)
From the old discussion page. The comments concern the historical version of the page (see the page’s History), not the modern version. They are preserved for posterity.
Talk:Rationality
This is a brief analysis of the wiki page.
Claims made by the page:
(1) Rationality is the characteristic of thinking and acting optimally.
(2) An agent is rational if it wields its intelligence in such a way as to maximize the convergence between its beliefs and reality; and acts on these beliefs in such a manner as to maximize its chances of achieving whatever goals it has.
(3) For humans, this means mitigating (as much as possible) the influence of cognitive biases.
(4) Instrumental rationality is the art of choosing and implementing actions that steer the future toward outcomes ranked higher in one’s preferences. Said preferences are not limited to ‘selfish’ preferences or unshared values; they include anything one cares about.
(5) Epistemic rationality is that part of rationality which involves achieving accurate beliefs about the world.
(6) It involves updating on receiving new evidence, mitigating cognitive biases, and examining why you believe what you believe.
(7) It can be seen as a form of instrumental rationality in which knowledge and truth are goals in themselves, whereas in other forms of instrumental rationality, knowledge and truth are only potential aids to achieving goals.
(8) Someone practising instrumental rationality might even find falsehood useful.
Support given and implied:
(1), (2), (4), (5), (6) could be taken as definitions, with (3), (7), (8) as their consequences. If this were done, it would need to be shown that (1), (2), (4), (5), (6) are good definitions, which would consist in the following.

Firstly, a purpose for stating the definitions would be given. This purpose could be, for example, to capture our pre-theoretical use of “rationality”, in order to utilise our intuitions about statements involving “rationality” as evidence for more complex statements. Alternatively, the purpose could be to provide a technical definition of “rationality” for later use, in the same way a mathematical symbol is used as a placeholder for a more complex mathematical object.

Secondly, the definitions would be shown to be consistent, well-formed, and clear, each to a level appropriate to the purpose. If the purpose was to provide a technical definition, the definitions would need to be fully consistent, precise, and formed out of other precise terms in a way consistent with those terms. If the purpose was to capture our pre-theoretical usage, the definitions would need only to be as consistent and as precise as we believe that usage to be (in the realm of the intuitions we wish to utilise), and to be as close as possible to that usage in the relevant realm.

Once all this was done, it would be shown that the definitions are good. Finally, it would need to be shown that (3), (7), (8) are indeed consequences of the definitions.
The blog posts linked may contain support for the claims in the manner outlined above, or in another manner.
Suggestions
The way in which the blog posts linked are intended to support the claims could be made explicit on the page. If any of the blog posts intend to support the claims by taking (1), (2), (4), (5), (6), or some combination of those, as definitions, the purpose of those definitions could be made explicit on the page.
Merge with Rationalism?
I would like to rename and merge this page with Rationalism—comments? Deku-shrub (talk) 04:07, 15 April 2017 (AEST)
The intro paragraph of this tag is more important than most tags’, since it appears when you hover over the tag in the filters on the front page, or on a post page, and will be many people’s first exposure to the definition.