Is g a measure of ability to absorb information in a non-inductive way?
Eliezer and Robin discussed g somewhat in their debate. I think this question is one that we can do some more research on ourselves. The current hypothesis I’m exploring is that g measures the ability to take in information non-inductively; this includes gossip, culture and taught skills.
If g is a real phenomenon then it has to be explained in some form. What is it that some people possess that enables them to do better in a vast range of activities than other people? It seems likely that it is the ability to absorb information from the world in some sense.
When I was studying machine learning I was exhorted to put all the information about a problem domain into the learning algorithm, to make it learn efficiently. Since the algorithm was supposed to be learning about the world, it seemed backwards that I needed to tell it about the problem to make it work. And each problem would need different information, so from this view there shouldn’t be one factor that makes a system efficient over large numbers of problems. There is, however, one regularity about information in the world that could be exploited to help solve other problems: there is a lot of information generated by humans, stored in standard formats. So if the skill of absorbing that information increased, you would expect it to have knock-on effects on solving lots of other problems.
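To illustrate the “put the domain information into the algorithm” point, here is a minimal toy sketch (my own invented example, not from the original post; it assumes numpy is available and uses a made-up task): a nearest-neighbour learner given a hand-built feature that encodes a known invariance of the problem generalises from far fewer examples than the same learner given raw inputs.

```python
# Toy illustration: telling the learner about the problem domain.
# The task (classify points by whether they fall in a ring around the origin)
# and the "informed" feature are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    pts = rng.uniform(-1, 1, size=(n, 2))
    labels = (np.linalg.norm(pts, axis=1) < 0.5).astype(int)
    return pts, labels

def nn_accuracy(train_x, train_y, test_x, test_y):
    # 1-nearest-neighbour: predict the label of the closest training point.
    preds = []
    for p in test_x:
        idx = np.argmin(np.linalg.norm(train_x - p, axis=1))
        preds.append(train_y[idx])
    return np.mean(np.array(preds) == test_y)

train_x, train_y = make_data(20)      # deliberately few training examples
test_x, test_y = make_data(2000)

# Learner 1: raw coordinates; it must discover the radial symmetry itself.
raw_acc = nn_accuracy(train_x, train_y, test_x, test_y)

# Learner 2: we inject the domain fact "only distance from the origin matters".
informed = lambda x: np.linalg.norm(x, axis=1, keepdims=True)
informed_acc = nn_accuracy(informed(train_x), train_y, informed(test_x), test_y)

print(f"raw features:      {raw_acc:.2f}")
print(f"informed features: {informed_acc:.2f}")
```

With so few training examples the informed representation usually generalises noticeably better, which is the sense in which problem-specific information makes a learner efficient on that one problem but not on others.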
I’m avoiding calling it social or linguistic information, as that seems to have connotations of solving social and linguistic problems, whereas this type of information can help in solving pattern recognition problems. For example, learning the concept of a triangle, and that its triangle-ness is invariant to rotation, scaling, etc., can help in IQ tests where changes are made to pictures of triangles and you have to spot the commonalities.
There is a sea of useful information out there in linguistic form. Maths and languages are taught linguistically: we memorise times tables and the letters of the alphabet. Even rationality as practised by Less Wrong counts in this class. What happens if you aren’t very good at absorbing this vast wealth for some reason? It would be reasonable to expect that you would be less competent in general than your fellow man. Being able to “understand” concepts or formulae is closely associated with the lay meaning of intelligence.
So what can we predict about g if this view is correct?
Taking in non-statistical social information is a multi-stage process. You have to do things like parse the sentence or symbols, examine the claim against your other beliefs for consistency, follow its logical conclusions, and update and store it so that it can be used in this process for other beliefs. And there are probably many other steps that I cannot guess at. Any of these might break slightly, reducing the efficacy of the whole process. Only the first step I mentioned is unique to human intelligence; the other steps occur whenever you infer a belief and want to check it against your other knowledge.
So there may be many genes affecting g, any of which can break slightly, and most of them are probably also present in other mammals.
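As a toy illustration of that multi-stage picture (my own sketch, with invented stage names and numbers, assuming numpy): if absorbing a taught belief requires several stages in sequence and each stage can be independently, slightly degraded, the overall efficiency is roughly the product of the stage efficiencies, so small defects anywhere in the chain show up as one apparently general deficit.

```python
# Toy model: overall information-absorption efficiency as a product of
# independently varying stage efficiencies. Stage names and numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
stages = ["parse", "check consistency", "follow implications", "update", "store"]

def simulate_person():
    # Per-stage efficiency near 1.0; the small independent variation stands in
    # for the combined effect of many different genes, any of which can "break slightly".
    efficiencies = np.clip(rng.normal(0.95, 0.05, size=len(stages)), 0.0, 1.0)
    return efficiencies.prod()

overall = np.array([simulate_person() for _ in range(10_000)])
print(f"mean overall efficiency: {overall.mean():.2f} (spread {overall.std():.2f})")
```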
Looking at the genetics of g, it seems that it is highly heritable in some circumstances and that multiple different genes have an effect on it[1], which is consistent with this hypothesis. We haven’t narrowed down the loci, according to that article; I’ll try and find something more recent in a while. The review article by Deary referenced above has a slightly less specific hypothesis.
Bouchard (1997) has proposed that we inherit not intellectual capacity as such; rather, species-typical affective-motivational systems shaped by the environment of evolutionary adaptation that drive both capacity and preferences. Following Hayes (1962), he suggested that manifest intelligence is the demonstration of skills and knowledge accumulated during the experiences created by these affective-motivational systems.
It goes on to mention that the Flynn effect is consistent with this: society can accumulate better knowledge, which then gets passed on to new generations of humans, who then show higher measured intelligence.
I’m going to do some more reading of references and will update this post as time allows, but I would be interested to know of alternate explanations. Then perhaps we can see which side of the debate the best explanation supports.
Edited to try and clear up confusions − 5th July
[1] I. J. Deary, W. Johnson, … (2009). Genetic foundations of human intelligence.
If g is about social intelligence, why does it sometimes seem to be practically inversely correlated with ‘emotional intelligence’ and other such things? Those with high IQ are not known for their keen social skills.
This is a popular stereotype, both among those without high IQ and those with, but is there any evidence? I cannot say I have noticed this among the people I know. On the contrary, if anything, the keenest minds I have been fortunate enough to meet have generally been socially successful as well.
Alright, I’ll put it another way. This theory suggests an extremely high correlation between social skills and intelligence, perhaps as high as 1. Let’s say 0.9. The average LWer, if I remember the survey results right, tends to have IQs in the 120-150 range, putting them in a high percentile of the populace, perhaps in the top 3% or so. Would you say the average LWer is even in the top 10% of the populace for social skills?
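A rough back-of-envelope version of this argument, as a hedged sketch (my own, assuming IQ and social skills are jointly normal with the correlation posited above, and using scipy only for the normal CDF):

```python
# Hypothetical check of the reasoning above, under a bivariate normal model.
from scipy.stats import norm

r = 0.9                 # the correlation posited in the comment (an assumption, not data)
iq_z = 2.0              # roughly IQ 130, around the top 2-3% of the population
expected_social_z = r * iq_z   # regression toward the mean under bivariate normality
print(f"expected social-skills percentile: {norm.cdf(expected_social_z):.0%}")
```

Under that toy model, someone two standard deviations up in IQ would be expected to sit around the 96th percentile for social skills, which is why the “top 10%” question is a meaningful test of a near-1 correlation.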
Something has gone wrong somewhere if you think that is my theory.
Social information is not supposed to be equal to body language/subtext.
I’ll change the wording.
I don’t know what theory you’re referring to, or where that figure of 1 or 0.9 comes from. It certainly doesn’t come from me. Neither do I know enough LWers well enough to form any judgment of their average competence.
g isn’t actually inversely correlated with social or emotional intelligence. At least up to 3 standard deviations above the mean, they are positively correlated.
If it seems that way, it’s probably because nerdy/autism spectrum type people choose to emphasize their intellectual ability. Think “comparative advantage” as opposed to “absolute advantage”.
My understanding is that high IQ per se is positively correlated with social skills (as well as with physical attractiveness, health and lifespan, even after correcting for lifestyle). It’s a different story when you look at technical intelligence of the high-functioning Asperger’s variety, of course; the reason seems to be that this kind of technical intelligence is basically a combination of high IQ with a trading away of social aptitude: literally, a reallocation of some computing power that would normally be dedicated to that function, like a computer designer spending fewer transistors on the GPU to be able to spend more on the CPU.
Bearing in mind that the style and content of discussion on this site tends to specifically attract people with that sort of technical intelligence (geeks, to use the colloquial term for us), surveys on Less Wrong shouldn’t be treated as representative of high IQ people in general.
That’s a plausible explanation. But it suggests a lot of predictions. For example, if I’m following your analogy right, we ought to see that the subpopulation which has high IQ scores and low math scores would also have higher social skills, since the high IQ shows they have lots of transistors but the low math scores show they haven’t spent those transistors on the CPU. Do they?
It would certainly seem that they should. Anecdotally, it seems to me the answer is yes, but I don’t know offhand whether statistical evidence has been gathered.
Hmm, maybe “ability to absorb social information” was a poor choice of terminology. I didn’t mean to imply that people with it would be good at solving social problems. I wanted to contrast book reading and following a teacher with inferring information from experience or experiments (the sort of thing pigeons sometimes beat humans at). The first two are social because they involve an author and a teacher, whereas the latter are asocial: no other agent has to be involved.
Perhaps “linguistic information” would have been better.
Linguistic information isn’t much better, because that sounds like ‘crystallized’ intelligence as opposed to ‘fluid’ intelligence, where the most common tests—the matrix tests—make a determined effort to avoid anything even remotely like verbal or linguistic material.
You seem to be trying to define intelligence as the ability to generalize and learn from experience (“inferring information from experience or experiments”) as opposed to the ability to follow received rules (“book reading and following a teacher”). Your use of “social” in this context is irrelevant.
ADDED: Frank Herbert also said something similar in Chapterhouse: Dune.
What if g isn’t real and there actually isn’t any general intelligence, it just looks that way because of the diversity of selection pressures on areas of the brain?
Why is this inconsistent with g?
Because in that case there is no general intelligence for g to measure.
g is the ability to process complex information. It’s certainly possible that if selection pressures caused improvements in the “math” part of the brain but, because of trade-offs, caused a decrease in the “hand-eye coordination” part, then g as measured by intelligence tests would go up.
Sorry, this doesn’t make sense to me. My objection is that the information processing doesn’t have to be portable. If it’s not portable, then it’s not general. If it’s not general, there’s no g.
That’s all.
Any such discussion ought to reference, for completeness, the hypothesis that g doesn’t actually measure anything.
That doesn’t link to a post contending that g doesn’t measure anything, only that it is very hard to do heritability studies of it.
Both g as linguistic knowledge absorption and g as nothing would favour Robin’s side of the argument, I think. An AI that was utterly wonderful at linguistic knowledge absorption would not necessarily be able to make bio-nanotech without doing further experimentation, as society may not have all the information required (I’m thinking of catalogues of the hostile bacteria that the nanotech would have to survive).
Morendil probably meant to link to this article instead:
http://cscs.umich.edu/~crshalizi/weblog/523.html
That one was discussed on LW a while ago, though. Sadly, instead of using his extraordinary intellectual powers and knowledge of statistics to clarify these muddled issues, the author instead ended up creating what amounts to a piece of very clever propaganda for his favored side in the controversy.