Re: Rationality is the ability to do well on hard decision problems.
That sounds like the definition of intelligence—though it skips the “range of problems” bit. The “range of problems” qualification seems to be doing useful and desirable work there—though do we really want two separate terms meaning practically the same thing?
Certainly rationality as defined here is within the fuzzy cloud of groundings of the rather vague word “intelligence”. However, it is probably closer in meaning to “wisdom”.
Rationality differs from intelligence as commonly used: intelligence in humans is typically judged on abstract problems in situations of certainty, such as IQ tests, and frequently involves comparative assessments (IQ, exams) under time pressure, with tuition and preparation. Rationality typically deals with real-world problems, with all the open-endedness that entails, in situations of uncertainty and without tuition or practice.
My take on this issue is as follows:
There are a whole bunch of mental faculties, and scientists have demonstrated that skill at them is represented pretty well by a single quantity, g. That result leaves remarkably little space for a different, but nearby, concept of instrumental rationality.
So: terminology in this area appears to be in quite a mess—with two historically well-known terms jostling together in a space where scientists are telling us there is only really one thing. Maybe there are three jostling terms—if you include “reason”.
So: we need some philosophers of science to wade in and propose some resolutions to this mess.
There is a recent book, What Intelligence Tests Miss: The Psychology of Rational Thought, by Keith Stanovich, which argues that IQ tests measure g but leave out many other cognitive skills that are necessary for epistemic and instrumental rationality.
He proposes to use “intelligence” to refer to g, and “rationality” for epistemic and instrumental rationality, but since our community is perhaps more closely linked to the field of AI than to psychology, I don’t know if we want to follow that advice.
That certainly looks like a relevant book. I didn’t like some of Keith Stanovich’s earlier offerings much, though, so I probably won’t get on with it :-(
Reading summaries makes me wonder whether Stanovich has any actual evidence of important general intellectual capabilities that are not strongly correlated with Spearman’s g.
It is easy to bitch about IQ tests missing things. The philosophy behind IQ tests is that most significant mental abilities are strongly correlated—so you don’t have to measure everything. Instead people deliberately measure using a subset of tests that are “g-loaded”—to maximise signal and minimise noise. E.g. see:
http://en.wikipedia.org/wiki/General_intelligence_factor#Mental_testing_and_g
He cites a large number of studies that show low or no correlation between IQ and various cognitive biases (the book has very extensive footnotes and bibliographies), but I haven’t looked into the studies themselves to check their quality.
Right—well, there are plenty of individual skills which are poorly correlated with g.
If you selected a whole bunch of tests that are not g-loaded, you would have similar results.
What you would normally want to do is see what they have in common (call it r) and then see how much variation in common cognitive functioning is explained by r.
The classical expectation would be: not very much: other general factors are normally thought to be of low significance—relative to g.
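That check can be sketched numerically. The snippet below simulates a battery of test scores driven partly by one common factor plus test-specific noise, then extracts the top principal component of the correlation matrix and asks how much variance it explains. All of the numbers (sample size, loadings) are illustrative assumptions, not data from Stanovich or anyone else:

```python
# Sketch: extract a single common factor from a simulated test battery
# and ask how much of the variance it explains.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 8

# Simulate scores driven partly by one general factor plus noise.
g = rng.normal(size=(n_people, 1))
loadings = rng.uniform(0.3, 0.8, size=(1, n_tests))  # hypothetical loadings
scores = g @ loadings + rng.normal(size=(n_people, n_tests))

# Top eigenvector of the correlation matrix approximates the common factor.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)  # ascending order
share = eigvals[-1] / eigvals.sum()  # variance explained by the top factor
print(f"top factor explains {share:.0%} of total variance")
```

Running the same procedure on a battery of bias measures would show directly whether they share a general factor of any importance.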
The other thing to mention is that many so-called “cognitive biases” are actually adaptive. Groupthink, the planning fallacy, restraint bias, optimism bias, etc. One would expect that many of these would—if anything—be negatively correlated with other measures of ability.
Thanks, Wei, that’s very useful.
What is the correlation between income and IQ? Wikipedia says:

“Taking the above two principles together, very high IQ produces very high job performance, but no greater income than slightly high IQ.”

“IQ scores account for about one-fourth of the social status variance and one-sixth of the income variance. Statistical controls for parental SES eliminate about a quarter of this predictive power.”

To me, this indicates that there is something other than IQ (== g) that governs real-world performance.
The claim for g is that it is by far the best single intellectual performance indicator. That is not the same as the idea that it accounts for most of the variation. There could be lots of variation that is governed by many small factors—each of low significance.
From the cited page:
“Arthur Jensen claims that although the correlation between IQ and income averages a moderate 0.4” …and… “Daniel Seligman cites an IQ income correlation of 0.5”.
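Note what those figures imply: squaring a correlation gives the fraction of variance it explains, which is easy to check:

```python
# A correlation of r means the predictor accounts for only r**2 of the
# variance in the outcome, so even a "moderate" correlation leaves most
# of the variation to other factors.
for r in (0.4, 0.5):
    print(f"r = {r}: explains {r ** 2:.0%} of the income variance")
```

So correlations of 0.4 and 0.5 account for only 16% and 25% of the income variance, consistent with g being the best single indicator without explaining most of the variation.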
I am not entirely sure about the implicit idea that people’s aim in life is to make lots of money either. Surely it is more reasonable to expect intelligent people—especially women—to trade off making income against making babies—in which case, this metric is not a very fair way of measuring their abilities, because it poorly represents their goals.
One option is to ditch “instrumental rationality” as a useless and redundant term—leaving “rationality” meaning just “epistemic rationality”.
Another observation from computer science is that we have the separate conceptions of memory and CPU. Though there isn’t much of a hardware split in brains, we can still employ the functional split, and discuss and test memory and processing capabilities (somewhat) separately. Other computer-science-inspired attributes of intelligent agents are serial speed and degree of parallelism. If we are attempting to subdivide intelligence, perhaps these are promising lines along which to do it.
I think I can see why instrumental rationality could be regarded as just part and parcel of epistemic rationality. Once the probabilities have been rationally evaluated, what work is left for “instrumental reason” to do? Am I on the right track at all? If not, please elaborate.
I see the distinction between intelligence and rationality as assuming a model of an agent with a part that generates logical information and a part that uses the logical information to arrive at beliefs and decisions, with “intelligence” defined as the quality of the former part and “rationality” defined as the quality of the latter part. In the latter case “quality” turns out to mean something like “closeness to expected utility maximization and probability theory”.
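A minimal sketch of that latter part, under the stated model: given probabilistic beliefs (which the “intelligence” part would supply), rationality amounts to picking the action with the highest expected utility. All the action names, probabilities, and utilities here are illustrative, not from any source:

```python
# Toy decision problem: beliefs are (probability, utility) pairs per
# action; the "rational" choice maximizes expected utility.

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

# Hypothetical beliefs about rain (p = 0.3) and the utility of each action.
actions = {
    "take umbrella": [(0.3, 5), (0.7, 3)],     # dry in rain / minor nuisance
    "leave umbrella": [(0.3, -10), (0.7, 4)],  # soaked / unencumbered
}
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # -> take umbrella (EU 3.6 vs -0.2)
```

On this model, “intelligence” is whatever produces the probability and utility estimates, and “rationality” is how closely the final choice tracks the maximization step.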
That makes intelligence an internal sub-module that is some distance from actions, and so can’t be directly measured by tests. That is not what most scientists use the term to mean, I believe.