No, this is an unmitigated triumph. It’s amazing how people take such a negative view of this.
So let me get this straight: over the past few decades we have slowly moved from a viewpoint where Gould is a saint, intelligence doesn’t exist and has no predictive value since it’s a racist made-up concept promoted by incompetent hacks, it has no genetic component, and there is definitely nothing which could possibly differ between any groups at all; to a viewpoint where the validity of intelligence tests has been shown in multiple senses, the amount of genetic contribution has been accurately estimated, the architecture nailed down as highly polygenic & additive, the likely number of variants estimated, and we’ve started accumulating the sample sizes to begin detecting variants. Not only have we detected 60+ variants with >90% probability* (see the remarks on the Bayesian posterior probability in the supplementary material), we even have 3 which pass the usual (moronic, arbitrary, unjustified) statistical-significance thresholds. And wait, there’s more: they also predict IQ out of sample, and many of the implicated variants are known to relate to the central nervous system! And this is a disappointment where ‘we still have no idea’ and the findings are ‘maddeningly small’ with ‘inconclusive findings’?
* which imply you can predict much better than the article’s calculation of 1.8 points
You’ve got to be kidding me. Or is this how zeitgeists change? They get walked back step by step, and people pretend nothing has changed. When the tests are shown to be unbiased and predictive, we stop talking about them; when twin studies of every variety show genetic influences on intelligence, we talk about how very difficult causal inference is and how twin studies can go wrong; when genetics comes up, suddenly everyone is discussing how nonadditive and gene-environment effects will make identification impossible (never mind that there’s no reason to expect them to be large parts of the genetics); when good genetic candidates are found which don’t pass arbitrary thresholds, that’s taken as evidence they don’t exist and genetic influence is irrelevant; and when enough samples are taken to satisfy even those thresholds, each of the hits is deprecated as small and irrelevant. And the changes and refutations quietly go down the memory hole. ‘Of course some of intelligence is genetic, everyone knows that; but all the variants are small, so really, this changes nothing at all.’
No, the Rietveld papers this year and last were historic triumphs. The theory has been as proven as it needs to be. The fundamental points no longer need to be debated—the debate is over. In some respects, it’s now a pretty boring topic.
All that’s left is engineering and application: getting enough samples to infer the rest to sufficiently high posterior probabilities to make good-enough predictions, and exploiting new possibilities like embryo-selection.
No, this is an unmitigated triumph. It’s amazing how people take such a negative view of this.
We are looking at this in different contexts and using different baselines.
You are talking about how far we have come: from the genetic component of intelligence being treated as the malicious fantasy of evil people to it being just science. Sure (though you still can’t discuss it publicly). I’m talking about this particular paper and how big a step it is compared to, say, a couple of years ago.
My baseline is much more narrow and technical. It is “we look at the genome of a baby and have no idea what its IQ will be when it grows up”. That is still largely the case, and the paper’s ability to forecast does not look impressive to me.
The fact that intelligence is largely genetic and highly polygenic is already “normal” for me—my attitude is “yeah, sure, we know this, what have you done for me lately”.
I appreciate the historical context, which we are not free of by any stretch of the imagination (so, no, I don’t see unmitigated triumphs), but I was not commenting on progress over the last half-century. I want out-of-sample predictions of noticeable magnitude, and I think getting there will take a bit more than just engineering.
My baseline is much more narrow and technical. It is “we look at the genome of a baby and have no idea what its IQ will be when it grows up”. That is still largely the case, and the paper’s ability to forecast does not look impressive to me.
This paper validates the approach (something a lot of people, for a lot of different reasons, were skeptical of), and even on its own merits we still get some predictive power out of it: the 3 top hits cover a range of ~1.5 points, and the 69 variants with 90% confidence predict even more. (I’m not sure how much, since they don’t bother to use all their data, but if we assume the 69 effects are evenly distributed between 0 and 0.5 points, then the mean effect is 0.25 points and the total predictive power comes to more than a few points.)
What use is this result? Well, what use is a new-born baby? As the cryptographers say, ‘attacks only get better’.
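Following the back-of-envelope arithmetic above (the even 0–0.5 spread is that comment’s assumption, not a figure from the paper), the calculation looks like:

```python
# Back-of-envelope from the parent comment: 69 variants whose effects
# are assumed (not measured) to be spread evenly over 0-0.5 IQ points.
n_variants = 69
max_effect = 0.5

mean_effect = max_effect / 2            # midpoint of an even 0-0.5 spread
total_range = n_variants * mean_effect  # summed mean effects across variants

print(mean_effect)  # 0.25
print(total_range)  # 17.25
```

The sum overstates what any one person would see, since individuals carry only some of the plus alleles, but it shows why “a few points” is a conservative reading.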
I think getting there will take a bit more than just engineering.
And, uh, why would you think that? There’s no secret sauce here. Just take a lot of samples and run a regression. I don’t think they even used anything particularly complex like a lasso or elastic net.
There’s no secret sauce here. Just take a lot of samples and run a regression.
Pretend for a second it’s a nutrition study and apply your usual scepticism :-) You know quite well that “just run a regression” is, um… rarely that simple.
To give one obvious example, interaction effects are an issue, including interaction between genes and the environment.
Pretend for a second it’s a nutrition study and apply your usual scepticism :-) You know quite well that “just run a regression” is, um… rarely that simple.
No, that’s the great thing about genetic associations! First, genes don’t change over a lifetime, so every association is in effect a longitudinal study where the arrow of time immediately rules out A<-B or reverse causation in which IQ somehow causes particular variants to be overrepresented; that takes out one of the three causal pathways. Then you’re left with confounding—but there’s almost no way for a third variable to pick out people with particular alleles and grant them higher intelligence, no greenbeard effect, and population differences are dealt with by using relatively homogeneous samples & controlling for principal components—so you don’t have to worry much about A<-C->B. So all you’re left with is A->B.
To give one obvious example, interaction effects are an issue, including interaction between genes and the environment.
But they’re not. They’re not a large part of what’s going on. And they don’t affect the associations you find through a straight analysis looking for additive effects.
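A toy simulation (my construction, not from the thread) illustrates that last point: even when a gene–gene interaction contributes to the phenotype, a purely additive per-SNP fit still picks up a strong effect for each SNP involved.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5000
# Two independent SNPs with additive effects plus an interaction between them.
g1 = rng.binomial(2, 0.5, n).astype(float)
g2 = rng.binomial(2, 0.5, n).astype(float)
y = 0.4 * g1 + 0.4 * g2 + 0.3 * (g1 * g2) + rng.normal(0, 1, n)

def additive_beta(g, y):
    """Slope of a simple linear regression of y on genotype g."""
    gc, yc = g - g.mean(), y - y.mean()
    return (gc @ yc) / (gc @ gc)

# The purely additive fit still finds a strong positive effect for each SNP:
# the interaction's average contribution loads onto the marginal slopes.
b1, b2 = additive_beta(g1, y), additive_beta(g2, y)
print(b1, b2)
```

So an interaction term does not hide the variants from an additive scan; here each marginal slope comes out near 0.7 (the 0.4 additive effect plus the interaction averaged over the other SNP).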
Keep in mind the outside view of biological complexity: the known unknowns have tended to end up lower in complexity than we’ve predicted, but unknown unknowns continue to blindside us, unabated, adding to the total complexity of the human body.
I don’t think the outside view is relevant here. We have coming up on a century of twin studies and behavioral genetics, and very motivated people coming up with possible problems, and so far the traditional estimates are looking pretty good: when people go and look at genetics directly, the estimates for simple additive heritability look very similar to the traditional estimates. Just the other day we saw an example of a SNP study confirming the estimates from twin studies: “Substantial SNP-based heritability estimates for working memory performance”, Vogler et al 2014. If all these complexities were real and serious problems, and the Outside View advises us to be skeptical, why do we keep finding that the SNP/GCTA estimates look exactly like what we would have predicted?
Ok, I confess I have no idea what SNP and GCTA are. As for the study Lumifer linked to, Razib Khan’s analysis of it is that it suggests intelligence is a complex polygenic trait. This should not be surprising, as it is certainly an extremely complex trait in terms of phenotype.
Searching for genes that make people smart—we still have no idea...
But their expression does.

An expression that happens in circumstances dictated by what genes one started with.

How do you know?

Because if they were a large part of what was going on, the estimates would not decompose so cleanly, nor would the methods work as well as they do.