These are not my arguments, since I haven’t thought about the issue enough. However, the anthropologist Scott Atran, in response to the latest Edge annual question, “What Scientific Idea is Ready for Retirement?”, answered “IQ”. Here’s his response:
There is no reason to believe, and much reason not to believe, that the measure of a so-called “Intelligence Quotient” in any way reflects some basic cognitive capacity, or “natural kind” of the human mind. The domain-general measure of IQ is not motivated by any recent discovery of cognitive or developmental psychology. It thoroughly confounds domain-specific abilities—distinct mental capacities for, say, geometrical and spatial reasoning about shapes and positions, mechanical reasoning about mass and motion, taxonomic reasoning about biological kinds, social reasoning about other people’s beliefs and desires, and so on—which are the only sorts of cognitive abilities for which an evolutionary account seems plausible in terms of natural selection for task-specific competencies.
Nowhere in the animal or plant kingdoms does there ever appear to have been natural selection for a task-general adaptation. An overall measure of intelligence or mental competence is akin to an overall measure for “the body,” taking no special account of the various and specific bodily organs and functions, such as hearts, lungs, stomach, circulation, respiration, digestion and so on. A doctor or biologist presented with a single measure for “Body Quotient” (BQ) wouldn’t be able to make much of it.
IQ is a general measure of socially acceptable categorization and reasoning skills. IQ tests were designed in behaviorism’s heyday, when there was little interest in cognitive structure. The scoring system was tooled to generate a normal distribution of scores with a mean of 100 and a standard deviation of 15.
In other societies, a normal distribution of some general measure of social intelligence might look very different, in that some “normal” members of our society could well produce a score that is a standard deviation from “normal” members of another society on that other society’s test. For example, in forced-choice tasks East Asian students (China, Korea, Japan) tend to favor field-dependent perception over object-salient perception, thematic reasoning over taxonomic reasoning, and exemplar-based categorization over rule-based categorization.
American students generally prefer the opposite. On tests that measure these various categorization and reasoning skills, East Asians average higher on their preferences and Americans average higher on theirs. There is nothing particularly revealing about these different distributions other than that they reflect some underlying socio-cultural differences.
There is a long history of acrimonious debate over which, if any, aspects of IQ are heritable. The most compelling studies concern twins raised apart and adoptions. Twin studies rarely have large sample populations. Moreover, they often involve twins separated at birth because a parent dies or cannot afford to support both, and one is given over to be raised by relatives, friends or neighbors. This makes it impossible to rule out the effects of social environment and upbringing in producing convergence among the twins. The chief problem with adoption studies is that the mere fact of adoption reliably increases IQ, regardless of any correlation between the IQs of the children and those of their biological parents. Nobody has the slightest causal account of how or why genes, singly or in combination, might affect IQ. I don’t think it’s because the problem is too hard, but because IQ is a specious rather than a natural kind.
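As an aside on the mean-100/SD-15 scoring Atran mentions: modern IQ scoring is a linear rescaling of raw test performance onto that conventional scale. A minimal sketch (standardizing within the sample itself, which is a simplification; real tests norm against a large reference population):

```python
import statistics

def iq_scale(raw_scores):
    """Linearly rescale raw test scores so the sample has
    mean 100 and standard deviation 15, the conventional IQ scale.
    Real tests norm against a separate reference population;
    this within-sample version is just an illustration."""
    mean = statistics.mean(raw_scores)
    sd = statistics.stdev(raw_scores)
    return [100 + 15 * (x - mean) / sd for x in raw_scores]

# Rescaling is linear, so relative standing is preserved exactly.
scaled = iq_scale([12, 15, 17, 20, 26])
```

The point relevant to Atran’s argument is that the familiar bell curve of IQ is built into the scoring by construction, not discovered in the data.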
Which of reality, validity, and usefulness is this an argument against? All three? None?
Added: I don’t know what it would mean for IQ to be “real.” Maybe this is an argument that IQ is not real. Maybe it is an argument that IQ is not ontologically fundamental. But it seems to me little different than arguing that total body weight, BMI, or digit length ratio are not “real”; or even than arguing that temperature is not “real,” either temperature of the body or temperature of an ideal gas. The BQ sentence seems to assert that this kind of unreality implies that IQ is not useful, but I’d hardly call that an argument.
I tend to interpret “Is X real?” more or less as “Is X a part of the best predictive theory of the relevant domain?” This doesn’t require an object/property to be ontologically fundamental, since our best (all things considered) theories of macroscopic domains include reference to macroscopic (non-fundamental) properties.
According to this standard, Atran is arguing that IQ is not real, I think. Temperature would be real (as far as we know), but maybe BMI wouldn’t? I don’t know enough about the relevant science to make that judgment.
Anyway, given my preferred pragmatist way of thinking about ontology, there isn’t much difference between the reality, validity and usefulness of a concept.
I tend to interpret “Is X real?” more or less as “Is X a part of the best predictive theory of the relevant domain?”
It seems excessive to me to define real as a superlative. Isn’t it enough to be part of some good predictive theory? Shalizi explicitly takes this position, but it seems insane to me. He says very clearly that he rejects IQ because he thinks that there is a better model. His complaint is not that people are failing to adopt a better model, but that they are failing to develop one. To the extent that Atran means anything, he appears to mean the same thing.
I think the difference between usefulness and validity is that usefulness involves a cost-benefit analysis: it weighs the benefits of a model against the cost of using it in a given domain.
Lorentz ether theory is a good predictive theory, but I don’t want to say that ether is real. In general, if there’s a better theory currently available that doesn’t include property X, I’d say we’re justified in rejecting the reality of X.
I do agree that if there’s no better theory currently available, it’s a bit weird to say “I reject the reality of X because I’m sure we’re going to come up with a better theory at some point.” Working with what you have now is good epistemic practice in general. But it is possible that your best current theory is so bad at making predictions that you have no reason to place any substantive degree of confidence in its ontology. In that case, I think it’s probably a good idea to withhold ontological commitment until a better theory comes along.
Again, I don’t know enough about IQ research to judge which, if any, of these scenarios holds in that field.