What is Rationality?
This article is an attempt to summarize basic material, and thus probably won’t have anything new for the experienced crowd.
Related: 11 Core Rationalist Skills, What is Bayesianism?
Less Wrong is a blog devoted to refining the art of human rationality, but what is rationality? Rationality is unlike any subject I studied at school or university, and the synthesis of subjects and ideas here on Less Wrong is probably quite distinctive.
Fundamentally, rationality is the study of general methods for good decision-making, especially where the decision is hard to get right. When an individual is considering whether to get a cryonics policy, or when a country is trying to work out what to do about global warming, we are in the realm of decision-making that rationality can improve. People do badly on hard decision problems for a variety of reasons: they are not born with the ability to deal with the scientific knowledge and complex systems that our modern world runs on; they have never been warned that they should think critically about their own reasoning; they belong to groups that collectively hold faulty beliefs; and their emotions and biases skew their reasoning process.
Rationality is the ability to do well on hard decision problems.
Another central theme of rationality is truth-seeking. Truth-seeking is often used as an aid to decision-making: if you’re trying to decide whether to get a cryonics policy, you might want to find out whether the technology has any good evidence suggesting that it might work. We can make good decisions by getting an accurate estimate of the relevant facts and parameters, and then choosing the best option according to our understanding of things; if our understanding is more accurate, this will tend to work better.
Often, the processes of truth-seeking and decision-making, both on the individual level and the group level, are subject to biases: systematic failures to get at the truth or to make good decisions. Biases in individual humans are an extremely serious problem—most people make important life decisions without even realizing the extent and severity of the cognitive biases they were born with. Rational thought therefore requires a good deal of critical thinking—analyzing and reflecting on your own thought processes in order to iron out the many flaws they contain. Group dynamics can introduce mechanisms of irrationality above and beyond the individual biases and failings of a group's members, and good decision-making in groups is often most severely hampered by flawed social epistemology. An acute example of this phenomenon is the Pope telling HIV-stricken Africa to stop using condoms; a social phenomenon (religion) was responsible for a failure to make good decisions.
Perhaps the best way to understand rationality is to see some techniques that are used, and some examples of its use.
Rationality techniques and topics include:
- Following through with simple logical inferences and numerical calculations—A surprising number of bad decisions and conclusions can be avoided by doing relatively simple pieces of logical reasoning without error or flinching in the face of the conclusion. Common general examples include non sequiturs such as affirming the consequent, argument from fallacy, and taking absence of evidence as certitude of absence (“I haven’t found any evidence for it therefore it can never happen” type reasoning). Many bad decisions also result from not doing simple arithmetic, or not taking into account quantitative reasoning. See this website on environmentalism gone wrong due to a failure to reason quantitatively.
- Heuristics and biases—Perhaps the key insight that started the Overcoming Bias and Less Wrong blogs was the mounting case from experimental psychologists that real human decision-making and belief formation are far from the ideals of economic rationality and Bayesian probability. A key reference on the subject is Judgment under Uncertainty: Heuristics and Biases. For those who prefer the web, Less Wrong has a set of articles tagged “standard biases”. If you know your own flaws, you may be able to correct for them—this is known as debiasing.
- Evolutionary psychology and evolutionary theory—In his bestselling book Fooled by Randomness, Nassim Taleb writes, “Our minds are not quite designed to understand how the world works, but, rather, to get out of trouble rapidly and have progeny”. Understanding that the process that produced you cared only about inclusive genetic fitness in the ancestral environment, rather than about your welfare or your ability to believe the truth, can help you identify and iron out flaws in your decision-making. There is a good sequence on evolution on Less Wrong. Perhaps the most important piece of work on the implications of evolutionary theory for decision-making and rationality is Bostrom and Sandberg’s Wisdom of Nature; although it ostensibly aims at assessing human enhancement options, its style of reasoning is highly applicable to thinking about how to deal with the mixed blessings that evolution put inside our skulls.
- Defeating motivated cognition—Many specific instances and types of biased reasoning are probably created by the same set of sources, often processes deeply intertwined with our evolved psychology. The most pernicious of these “sources of biased reasoning” is motivated cognition, the king of biases. The human mind seems to have a way of short-circuiting itself whereby happy emotions come when you visualize an outcome that is good for you, and this causes you to search for arguments that support the conclusion that that good outcome will occur. This kind of “bottom line reasoning” is insidious, and decreasing the extent to which you suffer from it is a key way to increase your rationality. Leaving a line of retreat is one good antidote. There is a whole sequence on how to actually change your mind that attempts to beat this problem.
- Techniques of analytic philosophy—Analytic philosophers have spent a long time honing techniques to promote better thinking, especially about conceptually confusing subjects. They will often be very careful to explicitly define key terms they use, and be open and upfront about terms that they take as primitive, as well as being clear about the structure of their arguments.
- Bayesian statistics and the Bayesian mindset—Covered expertly in the article “What is Bayesianism?”. Briefly, the idea is that the beliefs of an ideal rational agent are formed by formulating hypotheses, assigning a prior credence to each, and then using Bayes’ theorem to work backwards from the data to determine how likely the various hypotheses are, given that data. In cases where there is overwhelming evidence, the strictures of Bayes’ theorem are unnecessary: it will be obvious which hypothesis is true. For example, you do not need Bayes’ theorem to deduce that Garry Kasparov would beat your grandmother at chess. Related to this are the various errors and lies that can arise from bad (or deliberately misleading) statistical analyses.
- Microeconomic ways of thinking—Microeconomics models rational agents as aiming to make good personal choices subject to resource constraints. Von Neumann and Morgenstern proved an important theorem stating that the preferences of a “rational” agent can be expressed as a utility function. Other researchers in microeconomics made significant advances by considering the marginal utility of actions—how much better do things get if one shifts one dollar of one’s expenditure from buying ice-cream to buying clothes? The notion of opportunity cost is a classic example of a microeconomic concept. Value of information is another. In recent times, microeconomics has taken human psychology into account more, leading to formal theories of boundedly rational and irrational agents, such as prospect theory.
- Game theory and signaling games—A sub-field of microeconomics so important that it deserves a separate mention, game theory analyzes the interactions between competing rational agents in a formal way. The key intuition pump is the prisoner’s dilemma, but I think that the formal analysis of signaling games is even more important for rationality, as signaling games explain so much about why people verbally endorse statements (the statement is there as a signal, not as an indicator of rational belief). Robin Hanson of Overcoming Bias has posted many times on how the subconscious human desire to signal affects our decision-making in weird ways.
- Creating good social epistemology and norms of rationality—Several sequences on Less Wrong are about how to create an atmosphere that encourages honest and productive social epistemology. Resisting groupthink and cultishness is an important step, as is dealing with the problem that politics tends to make humans stupid; this is covered in the “politics is the mind-killer” sequence. Finally, there is a sequence on creating good rationalist communities.
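The updating process described in the Bayesian statistics item above can be sketched in a few lines of code. This is a minimal illustration using an invented example (a fair coin versus one assumed to land heads 80% of the time), not a serious statistical model:

```python
# Two hypotheses about a coin, with equal prior credence.
priors = {"fair": 0.5, "biased": 0.5}
# P(heads | hypothesis) -- the biased coin is assumed to favour heads.
likelihoods = {"fair": 0.5, "biased": 0.8}

def update(beliefs, likelihoods):
    """Apply Bayes' theorem: posterior is proportional to prior times likelihood."""
    unnormalized = {h: beliefs[h] * likelihoods[h] for h in beliefs}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

posterior = priors
for _ in range(3):  # observe heads three times in a row
    posterior = update(posterior, likelihoods)

# After three heads, the biased hypothesis is favoured (roughly 0.80 vs 0.20).
print(posterior)
```

The point is the shape of the procedure: each observation multiplies each hypothesis's credence by how well that hypothesis predicted the observation, then renormalizes.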
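The von Neumann-Morgenstern picture from the microeconomics item above can be made concrete: a rational agent compares options by their expected utility. The probabilities and utilities below are invented purely for illustration:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one option."""
    return sum(p * u for p, u in outcomes)

# A hypothetical choice between a sure thing and a gamble.
safe = [(1.0, 50)]
risky = [(0.6, 100), (0.4, -20)]

eu_safe = expected_utility(safe)    # 50
eu_risky = expected_utility(risky)  # 0.6 * 100 + 0.4 * (-20) = 52

# An expected-utility maximizer takes the gamble here, despite the downside risk.
best = "risky" if eu_risky > eu_safe else "safe"
```

Marginal-utility reasoning is the same calculation applied to small shifts: compare the expected utility of the current allocation against the allocation with one dollar moved.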
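The prisoner's dilemma mentioned in the game theory item above fits in a small payoff table. The numbers here are conventional illustrative payoffs, chosen only to satisfy the dilemma's defining inequalities:

```python
# Payoffs are (row player, column player); higher is better.
# C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # the cooperator is exploited
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def best_response(opponent_move):
    """The row player's payoff-maximizing move against a fixed opponent move."""
    return max(["C", "D"], key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection is a best response to either move (a dominant strategy),
# yet mutual defection pays each player less than mutual cooperation.
print(best_response("C"), best_response("D"))
```

That tension between individually dominant choices and collectively better outcomes is the intuition pump the article refers to.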
Re: Rationality is the ability to do well on hard decision problems.
That sounds like the definition of intelligence—though it skips the “range of problems” bit. The “range of problems” qualification seems to be doing useful and desirable work there—though do we really want two separate terms meaning practically the same thing?
Certainly rationality as defined here is within the fuzzy cloud of groundings of the rather vague word “intelligence”. However, it is probably closer in meaning to “wisdom”.
Rationality differs from intelligence as commonly used in that intelligence in humans is commonly judged on abstract problems in situations of certainty, such as IQ tests, and frequently involves comparative assessments (IQ, exams) under time pressure, with tuition and preparation. Rationality typically deals with real-world problems, with all the open-endedness that entails, in situations of uncertainty and without tuition or practice.
My take on this issue is as follows:
There are a whole bunch of mental faculties, and scientists have demonstrated that skill at them is represented pretty well by a single quantity, g. That result leaves remarkably little space for a different, but nearby, concept of instrumental rationality.
So: terminology in this area appears to be in quite a mess—with two historically well-known terms jostling together in a space where scientists are telling us there is only really one thing. Maybe there are three jostling terms—if you include “reason”.
So: we need some philosophers of science to wade in and propose some resolutions to this mess.
There is a recent book, What Intelligence Tests Miss: The Psychology of Rational Thought, by Keith Stanovich which argues that the g measured by IQ tests actually leaves out many other cognitive skills that are necessary for epistemic and instrumental rationality.
He proposes to use “intelligence” to refer to g, and “rationality” for epistemic and instrumental rationality, but since our community is perhaps more closely linked to the field of AI than to psychology, I don’t know if we want to follow that advice.
That certainly looks like a relevant book. I didn’t like some of Keith Stanovich’s earlier offerings much, though, so I probably won’t get on with it :-(
Reading summaries makes me wonder whether Stanovich has any actual evidence of important general intellectual capabilities that are not strongly correlated with Spearman’s g.
It is easy to bitch about IQ tests missing things. The philosophy behind IQ tests is that most significant mental abilities are strongly correlated—so you don’t have to measure everything. Instead people deliberately measure using a subset of tests that are “g-loaded”—to maximise signal and minimise noise. E.g. see:
http://en.wikipedia.org/wiki/General_intelligence_factor#Mental_testing_and_g
He cites a large number of studies that show low or no correlation between IQ and various cognitive biases (the book has very extensive footnotes and bibliographies), but I haven’t looked into the studies themselves to check their quality.
Right—well, there are plenty of individual skills which are poorly correlated with g.
If you selected a whole bunch of tests that are not g-loaded, you would have similar results.
What you would normally want to do is see what they have in common (call it r) and then see how much of the variation in common cognitive functioning is explained by r.
The classical expectation would be: not very much: other general factors are normally thought to be of low significance—relative to g.
The other thing to mention is that many so-called “cognitive biases” are actually adaptive. Groupthink, the planning fallacy, restraint bias, optimism bias, etc. One would expect that many of these would—if anything—be negatively correlated with other measures of ability.
Thanks, Wei, that’s very useful.
What is the correlation between income and IQ? Wikipedia says:
To me, this indicates that there is something other than IQ (==g) that governs real-world performance.
The claim for g is that it is by far the best single intellectual performance indicator. That is not the same as the idea that it accounts for most of the variation. There could be lots of variation that is governed by many small factors—each of low significance.
From the cited page:
“Arthur Jensen claims that although the correlation between IQ and income averages a moderate 0.4” …and… “Daniel Seligman cites an IQ income correlation of 0.5”
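Squaring those quoted correlations (the coefficient of determination) shows how little of the income variance they actually account for:

```python
# r squared: the share of variance in one variable accounted for
# by a linear relationship with the other.
for r in (0.4, 0.5):  # the two IQ-income correlations quoted above
    print(f"r = {r}: r squared = {r * r:.2f} (about {r * r:.0%} of variance)")
```

So even at r = 0.5, roughly three quarters of the variation in income is left unexplained by IQ.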
I am not entirely sure about the implicit idea that people’s aim in life is to make lots of money either. Surely it is more reasonable to expect intelligent people—especially women—to trade off making income against making babies—in which case, this metric is not a very fair way of measuring their abilities, because it poorly represents their goals.
One option is to ditch “instrumental rationality” as a useless and redundant term—leaving “rationality” meaning just “epistemic rationality”.
Another observation from computer-science is that we have the separate conceptions of memory and CPU. Though there isn’t much of a hardware split in brains, we can still employ the functional split—and discuss and test memory and processing capabilities (somewhat) separately. Other computer-science-inspired attributes of intelligent agents are serial speed and degree of parallelism. If we are attempting to subdivide intelligence, perhaps these are promising lines along which to do it.
I think I can see why instrumental rationality could be regarded as just part and parcel of epistemic rationality. Once the probabilities have been rationally evaluated, what work is left for “instrumental reason” to do? Am I on the right track at all? If not, please elaborate.
One option is to ditch “instrumental rationality” as a useless and redundant term—leaving “rationality” meaning just “epistemic rationality”.
I see the distinction between intelligence and rationality as assuming a model of an agent with a part that generates logical information and a part that uses the logical information to arrive at beliefs and decisions, with “intelligence” defined as the quality of the former part and “rationality” defined as the quality of the latter part. In the latter case “quality” turns out to mean something like “closeness to expected utility maximization and probability theory”.
That makes intelligence an internal sub-module, that is some distance from actions, and so can’t directly be measured by tests. That is not what most scientists use the term to mean, I believe.
“rationality is the study of general methods for good decision-making”
“Rationality is the ability to do well on hard decision problems.”
“Rationality is also the art of how to systematically come to know what is true.”
It seems like three separate meanings! Is such overloading of meanings desirable?
Iaijutsu includes:
- how to not die when angry people come at you with sharp things unexpectedly
- how to bring your sword out of its sheath quickly
- how to cut stuff
That might seem like three separate skills, but they’re interrelated enough to be taught as a single school.
Wikipedia doesn’t really seem to agree with that:
http://en.wikipedia.org/wiki/Iaijutsu
It basically defines Iaijutsu as the art of drawing your sword.
That seems like one meaning, not three.
Chemistry includes the use of bunsen burners, test tubes, and acids. However, that is not the definition of chemistry.
So: I am not convinced you have got this example straight.
Re: Fundamentally, rationality is the study of general methods for good decision-making [...]
It’s about studying decision making—rather than being about actually making decisions?
I like the following pair of concise definitions:
Logic = internally consistent structure;
Reason = objectively verifiable logic.
So rationality is a subset of operations taking place within the (assumed-to-exist) structural logic of cognition/intelligence, which itself is subject to rational investigation.
An overlapping subset is the subjective emotional system, which presumably has its own logical architecture. My own interest is centered on possible ways to logically relate/integrate these two systems.
A necessary link: What Do We Mean By “Rationality”?
I think it is worth noting some of the posts criticizing game theory on this blog—particularly Newcomb’s Problem and Regret of Rationality.