The main bit of this episode that stuck with me was the reframing of growth mindset (see SSC’s commentary on it). Roughly, Vervaeke’s story is that the growth mindset studies are impressive (I think he’s a little too credulous but w/e), but also the evidence that intelligence (in the sense of IQ) is fixed is quite strong, and so having growth mindset about it is untenable. [If there’s a way to turn effort into having a higher g, we haven’t found it, despite lots of looking.] But when we split cognition into intelligence and rationality, it seems pretty obvious that it’s possible to turn effort into increased rationality, and growth mindset seems quite appropriate there.
but also the evidence that intelligence (in the sense of IQ) is fixed is quite strong, and so having growth mindset about it is untenable.
Is this true? Having looked into it, it doesn’t seem super true. Like, my guess is IQ is about as variable as competence measurements of most diverse skills. You can’t easily run any “did this intervention increase IQ?” studies, because IQ tests are highly gameable, so we don’t actually have any specific studies of real interventions on this topic.
My current guess is that you can totally just increase IQ in a general sense; not many people do it because it requires deliberate practice, and I am kind of frustrated at everyone saying it’s fixed. The retest correlation of IQ is only like 0.8 after 20 years! That’s likely less than your retest correlation for basketball skills, or music instrument playing, or any of the other skills we think of as highly trainable. Of course, it’s less clear how to train IQ since we have less obvious feedback mechanisms, but I just don’t get where this myth of IQ being unchangeable comes from. We’ve even seen massive population-wide changes in IQ, in the form of the Flynn effect, that correlate heavily with educational interventions.
I’m not sure which claim this is, but I think in general the ability to game IQ tests is what they’re trying to test. [Obviously tests that cover more subskills will be more robust than tests that cover fewer subskills, performance on test day can be impacted by various negative factors that some people are more able to avoid than others, etc., but I don’t think this is that relevant for population-level comparisons.]
The retest correlation of IQ is only like 0.8 after 20 years!
So, note that there are roughly three stages: childhood, early adulthood, and late adulthood. We know of lots of interventions that increase childhood IQ, and also of the ‘fadeout’ effect, whereby the effects of those interventions are short-lived. I don’t think there are that many that reliably affect adult IQ, and what we’re interested in is the retest correlation of IQ among adults.
In adulthood, things definitely change: generally for the worse. People make a big distinction between ‘fluid intelligence’ and ‘crystallized intelligence’, where fluid intelligence declines with age and crystallized intelligence increases (older people learn more slowly but know more facts and have more skills). What would be interesting (to me, at least) are increases (or slower decreases) on non-age-adjusted IQ scores. Variability in the 20-year retest correlation could pretty easily be caused by aging faster or slower than one’s cohort.
That’s almost certainly much less than your retest correlation for basketball skills
Hard to say, actually; I think the instantaneous retest correlation is higher for IQ tests than it is for basketball skill tests (according to a quick glance at some studies), and I haven’t yet found tests applied before and after an intervention (like a semester on a basketball team or w/e). We could get a better sense of this by looking at Elo scores over time for chess players, perhaps? [Chess is widely seen as trainable, and yet also has major ‘inborn’ variation that should show up in the statistics over time.]
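For what it’s worth, the analysis I have in mind is pretty simple. Assuming you had a table with one rating per player per year (a made-up schema here, I haven’t actually pulled any data), it would just be lagged correlations, something like:

```python
import pandas as pd

# Hypothetical input: one row per player per year, columns player_id, year, rating.
# (Made-up file name and schema; real rating data would need cleaning first.)
ratings = pd.read_csv("ratings.csv")

def lagged_retest_correlation(df, gap_years):
    """Correlation between a player's rating and their rating gap_years later."""
    later = df.assign(year=df["year"] - gap_years)  # shift later ratings back so rows line up
    merged = df.merge(later, on=["player_id", "year"], suffixes=("_t1", "_t2"))
    return merged["rating_t1"].corr(merged["rating_t2"])

for gap in (1, 5, 10, 20):
    print(gap, lagged_retest_correlation(ratings, gap))
```

You’d want to restrict to players with enough rated games in both years, and handle inactivity somehow, but that’s the basic shape.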
We’ve even seen massive population-wide changes in IQ, in the form of the Flynn effect, that correlate heavily with educational interventions.
Lynn is pretty sure it’s not just education, as children before they enter school show the same sorts of improvements. This could, of course, still have education as an indirect cause, where (previous) education is intervening on the parents, and I personally would be surprised if education had no impact here, but I think it’s probably quite small (on fluid intelligence, at least).
I don’t think there are that many that reliably affect adult IQ, and what we’re interested in is the retest correlation of IQ among adults.
Yep, 0.8 is the retest correlation among adults. Also, like, I don’t know of any big studies that tried to increase adult IQ with anything that doesn’t seem like it’s just obviously going to fail. There are lots of “here is a cheap intervention we can run for $50 per participant” studies, but those obviously don’t work for any task that already has substantial training time invested in it, or that covers a large battery of tests.
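To make the 0.8 a bit more concrete, here’s a quick sketch of how much individual scores move at that retest correlation, assuming scores are roughly bivariate normal with an SD of 15 (that’s just the usual IQ convention, not something pulled from a specific study):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sd, r = 100_000, 15, 0.8

# Pairs of (age-normed) scores 20 years apart with retest correlation r.
cov = sd**2 * np.array([[1, r], [r, 1]])
t1, t2 = rng.multivariate_normal([100, 100], cov, size=n).T

diff = t2 - t1
print(f"mean absolute change:    {np.abs(diff).mean():.1f} points")   # ~7.6
print(f"share moving 10+ points: {(np.abs(diff) >= 10).mean():.0%}")  # ~29%
```

So even at 0.8, roughly a quarter to a third of people move by 10+ points over the gap, and the correlation by itself doesn’t tell you whether that movement is training, life circumstances, or just measurement noise.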
Lynn is pretty sure it’s not just education, as children before they enter school show the same sorts of improvements.
Yep, definitely not just education. Also lots of other factors.
Hard to say, actually; I think the instantaneous retest correlation is higher for IQ tests than it is for basketball skill tests (according to a quick glance at some studies), and I haven’t yet found tests applied before and after an intervention (like a semester on a basketball team or w/e).
One of the problems here is that IQ is age-normalized. In absolute terms you are actually almost always seeing very substantial subcomponent drift and change; the changes just tend to be correlated across individuals (i.e. people tend to change in similar ways at the same age). This inflates retest correlations compared to something like a basketball test, which wouldn’t be age-normalized.
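The part of this that’s easy to show with a toy simulation (made-up numbers, purely to illustrate the mechanism) is that an age-normed score can look almost perfectly stable while raw performance shifts a lot, because the shared age trend is subtracted out before the score is reported, whereas a raw basketball-style test would show that shift directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

ability = rng.normal(100, 15, n)              # stable individual differences
raw_25 = ability + rng.normal(0, 7, n)        # raw score at age 25
raw_45 = ability - 10 + rng.normal(0, 7, n)   # everyone loses ~10 raw points by 45

def age_norm(scores):
    # Re-standardize against the same-age cohort, which is what IQ norming does.
    return 100 + 15 * (scores - scores.mean()) / scores.std()

print(f"mean raw change:    {(raw_45 - raw_25).mean():+.1f}")                      # ~ -10
print(f"mean normed change: {(age_norm(raw_45) - age_norm(raw_25)).mean():+.1f}")  # ~ 0
print(f"retest correlation: {np.corrcoef(raw_25, raw_45)[0, 1]:.2f}")              # ~ 0.82
```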
To make my epistemic state here a bit more clear: I do think IQ is clearly less trainable than much narrower skills like “how many numbers can you memorize in a row?”. But I don’t think IQ is less trainable than other bundles of complicated skills like “programming skill” or “architecture design skill”.
My current guess is that if you restrict to people who already know how to program, and run a research program on “can we improve people’s programming skill?” with about as much sophistication as current IQ studies, you would get results about as convincing that “no, you can’t improve people’s programming skill”. But this seems pretty dumb to me. We know of many groups that have substantially outperformed other groups in programming skill, and my inside view here totally outweighs the relatively weak outside view from the mediocre studies we are running. I also bet you would find that programming skill is really highly heritable (probably more heritable than IQ), and then people would go around saying that programming skill is genetic and can’t be changed, because everyone keeps confusing heritability with genetics and it’s terrible.
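Since the heritability/genetics confusion keeps coming up: heritability is a population variance ratio, the share of trait variance that tracks genetic differences, and it says nothing about whether the level can move. A toy sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

genes = rng.normal(0, np.sqrt(0.8), n)   # genetic component (made-up split)
env = rng.normal(0, np.sqrt(0.2), n)     # environmental component
skill = genes + env                      # heritability 0.8 by construction

def h2(trait):
    # Heritability: share of the trait's variance that tracks genetic differences.
    return np.var(genes) / np.var(trait)

# A universal intervention (say, everyone gets good training) shifts the whole
# distribution by a full standard deviation without touching the variance split.
skill_after = skill + 1.0

print(f"mean before/after: {skill.mean():+.2f} / {skill_after.mean():+.2f}")  # ~0.00 / ~1.00
print(f"h^2  before/after: {h2(skill):.2f} / {h2(skill_after):.2f}")          # 0.80 / 0.80
```

This is basically the Flynn effect situation: the whole distribution can shift a lot while the trait stays ‘highly heritable’ the entire time.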
This doesn’t mean increasing programming skill is easy. It actually seems kind of hard, but it also doesn’t seem impossible, and from the perspective of a private individual “getting better at programming” is a totally reasonable thing to do, even if “make a large group of people much better at programming” is a really hard thing to do that I don’t have a ton of traction on. I feel similarly about IQ. “Getting better at whatever IQ tests are measuring” is a pretty reasonable thing to do. “Design a large-scale, scalable intervention that makes everyone much better” is much harder, and I have much less traction on that.
I think laying out your thoughts on this would make a great top-level post. Starting from your comments here and then adding a bit more detail.
Do you happen to remember the source for this? I’m having trouble finding any studies that seem to bear directly on the question.