Roughly speaking, the IQ score measures one’s ability to recognize patterns, so it isn’t a direct measurement of intelligence per se, but of an ability that correlates strongly with several other abilities that people associate with the much fuzzier concept of intelligence.
If you practice for IQ tests, you’re going to become better at detecting the specific kinds of patterns used in IQ tests, but then your IQ score will correlate less with your general pattern-recognition ability, and in turn with those other traits, so at some point your score will stop reflecting your general intelligence.
To increase your intelligence as a whole you’d have to become better at recognizing ever more complex patterns in general, and not only when you’re deliberately focusing on a problem, but automatically, as a passive ability. That would require quite a lot of cerebral plasticity, which is something adults almost universally lack.
Now, great pattern recognition, and by extension a high IQ score, doesn’t by itself suffice to say someone is actually intelligent in a broad sense, because someone who is very good at detecting hard-to-perceive patterns (hard to perceive for the majority, that is) also becomes very good at detecting patterns that aren’t there at all. For example, conspiracy theorists—the kind who create conspiracy theories, not mere followers—are usually very high IQ individuals whose pattern recognition went in quite wrong directions. Hence, a high IQ is, at best, a very raw measure of one’s cognitive potential rather than one’s cognitive execution, and that potential does require training to be turned into something actually able to accomplish great things.
Be that as it may, there have been some studies on what does increase average IQ scores for populations at large. The main factor, above everything else, is better nutrition during infancy. That helps the brain develop without hindrance, resulting in most of those children, when they grow up, being able to recognize many more patterns than peers who were malnourished in their first years. That one factor cannot be compensated for later in life. And on top of that, access to excellent education in a stable environment during one’s formative years also adds a few extra points.
Finally, it should be noted that the effects of IQ scores are better understood (because they’re easier to study) for lower IQs than for higher IQs. For lower IQs there are many correlations with antisocial behavior, criminality, impulsiveness, mental illness, etc. For higher IQs there are correlations with mathematical prowess and higher incomes, probably because we live in a society that values professions requiring pattern recognition (engineering, law, finance, programming, anything requiring complex strategizing), but not much beyond that.
If you practice for IQ tests, you’re going to become better at detecting the specific kinds of patterns used in IQ tests, but then your IQ score will correlate less with your general pattern-recognition ability, and in turn with those other traits, so at some point your score will stop reflecting your general intelligence. [...]
Thank you for your answer! Are you sure of this? Maybe the sort of people who are motivated to get a high score on an IQ test are the same sort of people who are motivated to get good grades in college, who work harder to advance their careers, and so on.
To clarify, we have two possible explanations for the correlation:
A) People with a high IQ score got their high IQ score because they have a better innate capacity to detect patterns, so they are also innately more capable of becoming engineers or lawyers. People with a low IQ score have low innate intelligence, so they are not able to understand that being a criminal is a bad idea.
B) People with a high IQ score got their high IQ score because they were motivated to get a high IQ score. They are also more likely to become engineers or lawyers, because they are motivated to work hard to achieve their goals. People with a low IQ score just wanted to finish the boring test as soon as possible, so they gave random answers and went back to drinking at the bar.
I think that a mixture of (A) and (B) may be true. Most of your answers suggest that (A) is the more relevant explanation. However, if you replace “IQ score” with “school grades”, for example, I would say intuitively that (B) is the main answer. Is the IQ test fundamentally different from a school test?
A and B are only different if “motivation” is not an innate factor. But what else would it be? Is it even an actual thing, or just a name for a phenomenon, masquerading as an explanation of the phenomenon?
(A) and (B) make different predictions. If (B) is true, people with high IQ will not be particularly good at a new task when they try it for the first time—but then they would improve by application. If (A) is true, people with high IQ will be immediately good at new cognitive tasks (or, at least, much better than people with low IQ).
Are you sure of this? Maybe the sort of people who are motivated to get a high score on an IQ test are the same sort of people who are motivated to get good grades in college, who work harder to advance their careers, and so on.
This is essentially proposing a correlation between intelligence and conscientiousness. But from my reading they appear to be mostly uncorrelated.
Is the IQ test fundamentally different from a school test?
Yes. It measures an intrinsic ability, not a learned skill. I’ll make an analogy:
Suppose there were an “athletic ability” score calibrated so that it can gauge, via a set of physical tests, the athletic potential of individuals. It’s devised so that a population with no specific training can take it, and the result correlates with, for example, how fast a person will be able to run if they dedicate themselves to short-distance running training full time.
This limit, notice, is genetic. Your genes determine the structure and interconnection of your skeleton, muscles, nerves, etc., and how well they all respond to diet, training regimen, stimulants, and other external factors. Hence, every person will have a range of running speed that goes from their speed when running without any specific training, let’s call this speed A, all the way up to their maximum genetically determined potential, let’s call this speed B.
The “athletic ability” scoring then, taking into account the several factors tested, including your current, untrained running speed, gives you a number that, when you look it up in a table constructed from the tests of thousands of other individuals, shows what your maximum speed, B, will be if you dedicate yourself completely to developing your running potential.
Now, suppose an individual, for some reason, trains day and night at short-distance running before taking the “athletic ability” test. Maybe they’re a teen with parents who insist they excel at the test due to, let’s say, the potential for tuition fee reductions in college. Or maybe they have a parent who’s a running champion and they want to impress them, live up to their standards, or whatever. They thus look at how the test is applied, do everything to nail it, and so, when the day comes, they take the test—in which, among other things, they run at their current speed P—and as a result obtain a much higher score than they would have gotten otherwise. According to this score, when they look at the statistical table of maximum short-distance running speeds, it tells them that if they begin training full time (which assumes they aren’t already training) their maximum speed B will be Q! Amazing!
So, our fictional teen continues their training as much as they can, full time, doing everything optimally! And now, years later, at the very top of their performance, their maximum speed is… Q? No. It’s a little bit above P, but nowhere near Q. Why? Because the test wasn’t devised for people who were already training, much less for people trying to game it. They gamed the test, got a higher score than they would have gotten otherwise, but their maximum genetically determined potential speed is what it is. Gaming the test won’t change their true maximum speed B no matter how much they try to skew it.
Now, suppose everyone began gaming the “athletic ability” test so that the table of maximum speeds B no longer correlated with scores. What would happen? Well, psychologists would analyze the new trend. They’d look at current full-time professional short-distance runners and the scores they had obtained on their “athletic ability” tests a few years before, and develop a new table with updated maximum speeds B per score, so that both numbers began correlating again.
That’s how IQ works.
A high IQ person, let’s say someone with an IQ of 140, can instantly grasp novel, complex, abstract concepts in a field they never studied before after barely glancing at it and hearing it explained once, in a summed-up form that took 15 minutes to deliver, and then get a 9.0 on a test without having studied it again in between. A person with an IQ of 100, in contrast, might require a full class on the topic, several hours or even days of study at home, and lots of reading to manage the same 9.0 on the same test.
If the latter had gamed the IQ test so that their official score were also 140, that wouldn’t have changed the outcome of this scenario. They’d still have had to take the full class, study several hours to days at home, and do lots of reading to get that 9.0, while the “not-gamed” IQ 140 person still required a mere 15 minutes of hearing about the topic once to score that 9.0. And had the “not-gamed” IQ 140 person decided to get a 10.0 with honors, they’d have needed to study the topic for maybe 3 more hours; they just didn’t care enough to bother.
Now, suppose everyone began gaming the “athletic ability” test so that the table of maximum speeds B no longer correlated with scores. What would happen? Well, psychologists would analyze the new trend. They’d look at current full-time professional short-distance runners and the scores they had obtained on their “athletic ability” tests a few years before, and develop a new table with updated maximum speeds B per score, so that both numbers began correlating again.
Here you are supposing that everyone does the same amount of preparation; otherwise, recalibrating the score would not be enough. I think that this is the main point: does everyone prepare the same amount for IQ tests?
AFAIK, most don’t prepare at all since there isn’t much at stake.
Very few companies hire based on high IQ. When they do, it’s usually because the problems the employee will deal with are highly mathematical and/or logical in nature and a person with a low (real) IQ would do really poorly at them, and in any case they still require candidates to have specific skills, which are more decisive than IQ. And when such companies do take IQ into consideration, they usually do so not by requiring an official score, but by making candidates go through aptitude tests and puzzles, then checking how they scored. Very few go for a fully certified score, and those that do tend to have requirements such that they may well also demand a full personality evaluation, meaning a complete Big 5 assessment.
On the flip side, there are jobs with a maximum IQ score requirement that don’t hire people above it, the reasoning being that anyone with a higher IQ would get utterly bored at the job and leave at the first opportunity, wasting the company’s time and training investment. So they administer a test, and if you score too well on it you’re turned away.
Hence, if one were to game the score, one would either end up in a job with such extreme mathematical and logical thinking requirements that they’d be constantly mentally exhausted and leave, unable to cope with spending so much mental energy (and this is measurable: brain scans of high IQ individuals show very little energy expenditure when they deal with complex tasks that, in average IQ individuals, cause the brain to flare up in a storm of long, constant, intense activity). Or, at the other extreme, it would put them in a job with such low requirements for their abilities that it’d make them feel miserable until they in fact jumped ship for something more stimulating.
Now, one important thing to keep in mind is that IQ scores aren’t absolute values; they’re relative values based on how a population answers the tests, and the scores are defined to follow a Gaussian distribution.
If a test has 100 questions, and half of those taking it get fewer than 60 questions right while the other half get more than 60 right, then IQ 100 is defined as “getting 60 questions right”. If, 20 years later, half of those taking the same test get fewer than 70 questions right while the other half get more than 70 right, then IQ 100 is redefined as “getting 70 questions right”. Hence, IQ 100 always tracks the population median (which, for a Gaussian, is also the mean).
Then, for numbers above and below 100, every ‘n’ points (usually 15) is defined as one standard deviation. Since the distribution is Gaussian, this means that IQ 85 (1 standard deviation below the mean) is defined as the number of questions that all but the bottom 15.9% of respondents get right; IQ 100 (the mean) is the number the aforementioned 50% of respondents get right; IQ 115 (1 standard deviation above the mean) is the number only the top 15.9% of respondents get right; IQ 130 (2 standard deviations above the mean) the number only the top 2.3% get right; IQ 145 (3 standard deviations) the number only the top ~0.1% get right; and so on and so forth, in both directions.
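To make the arithmetic concrete, here is a minimal sketch of that mapping in Python, assuming the usual mean of 100 and standard deviation of 15. The function name and percentile values are purely illustrative, not any real test’s norming procedure:

```python
# Percentile rank -> IQ score under the standard Gaussian model:
# IQ = mean + sd * Phi^{-1}(p), where Phi^{-1} is the inverse normal CDF.
from statistics import NormalDist

def iq_from_percentile(p: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Map a percentile rank p (strictly between 0 and 1) to an IQ score."""
    return mean + sd * NormalDist().inv_cdf(p)

# The thresholds quoted above fall out directly (values are approximate):
for p in (0.159, 0.50, 0.841, 0.977, 0.9987):
    print(f"percentile {p:.2%} -> IQ {iq_from_percentile(p):.0f}")
# percentile 15.90% -> IQ 85
# percentile 50.00% -> IQ 100
# percentile 84.10% -> IQ 115
# percentile 97.70% -> IQ 130
# percentile 99.87% -> IQ 145
```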
This means that, if people began gaming the score, the shape of the curve would change into a distorted Gaussian, introducing a perceptible skew that could be measured with standard statistical procedures, which in turn would prompt a renormalization of the test so that it tracked averages and standard deviations correctly once again, rendering any such effort a one-time stunt.
The shape is perceptibly different from a Gaussian (at least in the distributions that I found googling “empirical distribution of IQ” and similar keywords). This is not surprising, because almost nothing in Nature is an ideal Gaussian.
Not really. Currently the IQ distribution is defined as a Gaussian, so if the tests are made correctly and the proper transformation is applied, the shape of the curve, for a large enough population, will literally be a Gaussian “by definition”. Check this answer on Stack Exchange for details and references:
Wood, Why are IQ test results normally distributed?, URL (version: 2019-12-23)
Now, evidently, for smaller sub-samples of the population the shape will vary.
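As an illustration of why that holds, here is a toy sketch (not any test publisher’s actual procedure) of a rank-based inverse normal transform: whatever shape the raw scores come in, replacing each score with the IQ value at its percentile rank yields a distribution with mean 100 and standard deviation 15 essentially by construction:

```python
# Toy renormalization: map raw scores of any shape onto an IQ-style Gaussian.
# The synthetic raw scores below are deliberately skewed (exponential) to show
# the shape being corrected; real norming uses large standardization samples.
import random
from statistics import NormalDist

def normalize_scores(raw: list[float], mean: float = 100.0, sd: float = 15.0) -> list[float]:
    """Replace each raw score with the IQ value at its percentile rank."""
    n = len(raw)
    order = sorted(range(n), key=lambda i: raw[i])   # indices, lowest score first
    iq = [0.0] * n
    for rank, i in enumerate(order):
        p = (rank + 0.5) / n                         # midpoint percentile rank
        iq[i] = mean + sd * NormalDist().inv_cdf(p)
    return iq

random.seed(0)
raw = [random.expovariate(1.0) for _ in range(10_000)]   # heavily skewed raw data
iqs = normalize_scores(raw)
m = sum(iqs) / len(iqs)
s = (sum((x - m) ** 2 for x in iqs) / len(iqs)) ** 0.5
print(f"mean={m:.1f}, sd={s:.1f}")   # ≈ mean=100.0, sd=15.0
```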
It’s designed to be a normal distribution, but actual implementations don’t work out exactly that way. For starters, the distribution has a heavier tail at the low end, because brain damage is a thing and brain augmentation isn’t (yet).
I would avoid athletic metaphors for IQ. People naturally tend to assume brains work like muscles. Muscles get stronger with exercise. Brains do not get smarter with “exercise” (study/puzzles/classes/etc). The data is clear—it just doesn’t work that way.
A lot of actors have children that grow up to be actors because they hung out with actors growing up. A lot of politicians have children who grow up to be politicians because they hung out with politicians growing up. A lot of ex-convicts have children that grow up to be ex-convicts because they hung out with ex-convicts growing up. Access to resources, or the lack of them, seems to have a huge impact on human potential. I often wonder how many of our characteristics are truly innate, and not just learned or trained. Nature or nurture? An argument for nature undercuts the idea that education and good opportunities should be made available to everyone.
I often wonder how many of our characteristics are truly innate, and not just learned or trained.
In the case of IQ this has been well established. There’s some variance due to nurture, but the bulk of it is nature. For example, very young children adopted by high IQ couples, and raised with a focus on intellectual matters, still demonstrate an IQ much closer to that of their lower-IQ biological mother than to that of their adoptive parents.
This isn’t to say that being raised by high IQ parents has no consequences. These children learn several personal and cultural skills in an environment that nurtures their abilities, and therefore manage, for example, to obtain a bachelor’s degree with a much higher likelihood than the average for their origin groups, meaning their Big 5 “Conscientiousness” trait did grow remarkably.
In terms of their raw IQ, though, other than the increase due to better nutrition, no, nurture has no effect, unfortunately.
An argument for nature undercuts the idea that education and good opportunities should be made available to everyone.
Not really. And “is” doesn’t determine an “ought”. It can easily be argued, to the contrary, that precisely because low IQ individuals need more institutional support compared to high IQ individuals, they should receive a much better tailored education and much better vocational opportunities, as high IQ individuals are much more likely to solve what they need solved on their own without, or with bare minimum, external aid.
And “is” doesn’t determine an “ought”.
That’s not entirely true. Whether or not something “is” known to work or to fail often determines whether you “ought” to do it.
The IQ research cuts against the grain of our culture’s belief in equality and hard work, so nobody really likes it and even mentioning IQ in many contexts is socially dangerous. However, the IQ research is generally considered to be the strongest and most conclusive body of evidence in the social sciences. Trying to equalize the effects of IQ using education doesn’t work.
Whether or not something “is” known to work or to fail often determines whether you “ought” to do it.
Not at all. Knowing that doing X causes Y only tells you that if you want result Y, the way to achieve it is by doing X. It doesn’t tell you whether Y is desirable or not.
Hence, if a society wants maximum productive efficiency, and allocating more resources to its most intelligent members is the most effective way to achieve that, then yes, allocating more resources to them, and fewer to less gifted individuals, is the way to go. On the flip side, if a society wants, let’s say, to maximize equality of outcomes among its members, then it will ignore that means entirely and look for the method that provides that outcome.
The decision about the “ought”, then, is what truly determines which “is” will be chosen, not the other way around.
That’s true, but in the actual case society(1) wants to maximize equality of outcomes among its members, and we’ve spent decades looking for a method that will provide that outcome, and nothing we’ve come up with works(2). You might think we “ought” to be doing that, but the judgement is now between “we should continue to pursue this value, knowing that it’s never worked before and we have no reason to believe that it will start working any time soon” and “we should pursue other values that seem to be achievable”—which is a very different judgement.
(1) or the segment thereof that controls education policy
(2) There are a lot of marginal claims of questionable statistical and practical significance that never seem to scale up, but “nothing works” is a reasonable summary.