I’m not a biologist, but am I right in thinking that CRISPR could be the most important human innovation ever? This Wired article claims that a knowledgeable scientist thinks that the “off-target mutations are already a solved problem.” Within a decade we should know a lot about the genetic basis of intelligence. Wouldn’t it then probably be easy to create embryos that grow into extremely smart people, far smarter than any who have ever existed?
Emphasis on probably. Intelligence is not a simple matter, and it is unclear that our genome, even if we clearly identify all relevant factors, would be “open ended”; that is to say, there may be a difference between “making you as smart as you can be” and “making you smarter than any human ever”. As a poor analogy, we will certainly soon be able to make humans taller, but there may be limits to how tall a human can be without important system failures; we have already had very tall people, and even if we do want to breed for height, we might choose to top out at 7′0″ for health reasons. Likewise, when you think of smart people, it may be that you are thinking of people with skills maximized for specific functions at the cost of other functions, and a balanced intelligence might top out at some level… at least until we get past mapping what we have and into the much harder task of designing new types of genomes.
I’m not a biologist, but am I right in thinking that CRISPR could be the most important human innovation ever?
There are several competing techniques. People who use the other techniques think that CRISPR mostly has better PR, and is a fairly minor technical innovation. Gene editing, regardless of the technique involved, will be tremendously important for the next few decades.
When they say “a solved problem” they mean that the cost of off-target mutations is worth it for a single high-value edit. It is unlikely that most genomes have the option of a high-value edit to improve intelligence. It’s probably more like 1 IQ point per edit. Of course, accuracy in 10 years will be better than accuracy now. In fact, if we truly had no off-target mutations, we could act now, without knowing the structure of intelligence, just by “spell-checking”—correcting rare variants.
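As a toy illustration of what “spell-checking” could mean in practice (a sketch only; the variant IDs, frequencies, and the 1% rarity cutoff are all made up for the example):

```python
# Toy "spell-checking": flag the rare variants a genome carries for
# correction back to the common allele, without knowing which of them
# matter for intelligence. All IDs and frequencies are hypothetical.
population_freq = {
    "rs0001": 0.43,     # common variant: leave alone
    "rs0002": 0.008,    # rare: candidate for correction
    "rs0003": 0.31,
    "rs0004": 0.0004,   # very rare: candidate for correction
}
carried = ["rs0001", "rs0002", "rs0004"]   # variants this genome carries
RARE_CUTOFF = 0.01

to_correct = [v for v in carried if population_freq[v] < RARE_CUTOFF]
print(to_correct)  # ['rs0002', 'rs0004']
```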
just by “spell-checking”—correcting rare variants.
Yes, that’s Greg Cochran’s theory. I wonder by how much this could increase IQ? If I were a young billionaire I would be planning to create a clone of myself that didn’t have rare variants.
We don’t know. Cochran’s theory is not well backed by evidence at this point. Most of it is quite indirect, like the attempts at quantifying paternal age effects. Emil didn’t turn up anything when I asked the other day. Some studies which come to mind that don’t support the idea that mutation load matters much:
“The total burden of rare, non-synonymous exome genetic variants is not associated with childhood or late-life cognitive ability”, Marioni et al 2014: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3953855/
“A genome-wide analysis of putative functional and exonic variation associated with extremely high intelligence”, Spain et al 2015: http://www.nature.com/mp/journal/vaop/ncurrent/full/mp2015108a.html
“Thinking positively: The genetics of high intelligence”, Shakeshaft et al 2015
Spain and Shakeshaft aren’t relevant. Marioni is interesting, but I think 1% is way too high a cutoff.
Sure they are. The mutations involved in mutation load are, almost by definition, rare; if they had large effects, singly or in aggregate, that should show up in surveys of the high end. Instead, we only see effects at the low end, which is consistent with occasional very rare or de novo mutations that can drastically reduce IQ below the average, but not with variants that would raise it by multiple SDs for the above-average people who already escaped the retardation-bullets.
If there were aggregate effects, how would they show up in Spain and Shakeshaft? Just going by the abstract, Spain is looking for genes where the rare variant has a positive effect. That is the opposite of the mutational load theory, and they don’t find any. I think Shakeshaft reaches the same conclusion by pedigree analysis.
Say that there are 10k genes, MAF=0.01, each worth 1.5 IQ points. What would Spain detect? If the TIP population is at 10/3σ,* then each of these 10k genes appears in mutant form only 2⁄3 as often: 20 hits rather than the expected 30. That’s a 2-sigma event. So if an oracle gave you this list of 10k genes, you could use Spain to confirm it. But if you have to find the list, it’s harder. They should expect 5k false positives among the 200k variants that they tested. If all of the true genes were among the 200k, there would be 15k hits rather than the expected 5k, confirming the theory. But with poor coverage, the true hits might be lost in the noise. And even if they have good coverage, they have restricted themselves to non-synonymous protein-coding mutations.
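A quick sanity check of those numbers, as a sketch under a normal threshold model (the sample size n = 1500 is my assumption, chosen so that the expected carrier count matches the 30 above; the rest follows from the stated MAF and effect size):

```python
# Sketch: how depleted is a -1.5 IQ point (-0.1 SD) variant in a sample
# selected above 10/3 sigma? n_cases = 1500 is an assumption chosen to
# reproduce the "expected 30" carrier alleles per gene quoted above.
from scipy.stats import norm

n_cases = 1500
maf = 0.01
beta = 1.5 / 15          # 1.5 IQ points = 0.1 SD
z = 10 / 3               # selection threshold of the high-IQ sample

expected = 2 * n_cases * maf                   # 30 carrier alleles per gene
depletion = norm.sf(z + beta) / norm.sf(z)     # ~0.70, i.e. about 2/3
observed = expected * depletion                # ~21, vs. "20" above

# Poisson noise on a count of 30 is sqrt(30) ~ 5.5, so the per-gene
# deficit is on the order of a 2 sigma event, as claimed.
print(expected, round(observed, 1),
      round((expected - observed) / expected ** 0.5, 2))

# The footnote's conversion: the top 0.03% of a normal distribution
# starts at about 3.4 sigma.
print(round(norm.isf(0.0003), 2))              # 3.43
```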
Moreover, that model is what Steve Hsu believes, not the mutational load hypothesis. Spain et al can’t test the mutational load hypothesis: if the relevant genes are rarer or have smaller effects, they wouldn’t notice them at all. On the other hand, if the TIP population really is 5σ, it would be possible to detect more.
* The TIP population is usually described as 0.03% of the population, which is 3.4σ under a normal distribution, but I chose 10⁄3 for simplicity of calculation. They score about 5σ in raw SAT. Self-selection probably means that they’re actually rarer than 0.03%, but probably not much.
My lower bound guess, if rare variants turn out to be only a small portion of IQ, is 5 standard deviations.
Your answer confuses me. Why so much, if “rare variants turn out to be only a small portion of IQ”?
My lower bound is that mutational load contributes 10% of the variance in IQ; I call that small. Independently, I propose that there should be room for 50 standard deviations of improvement, though it’s not clear what that much improvement would even mean; surely linearity would break down. What I mean by the possibility of “50 standard deviations” is 20 disjoint sets of changes, each of which would accomplish 2.5 standard deviations.
If the typical gene is deleterious and contributes 1/N of a standard deviation, then there is room for N standard deviations of improvement above the mean. Of course there is really a mixture of genes of different effect sizes; I expect genes of both effect size 1⁄10 and effect size 1⁄100. Say, half of the variance from each. That gives room for 55 standard deviations of improvement.
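Spelling out the arithmetic behind that (my own derivation of the rule of thumb, under a plain additive model with small allele frequencies):

```python
# Additive-model arithmetic behind "effect 1/N => room for N SDs".
# For M deleterious variants at frequency p with effect s (in trait SDs):
#   variance contributed:       V ~= 2*M*p*s**2     (for small p)
#   copies carried per person:  ~2*M*p = V / s**2
#   gain from correcting all:   2*M*p*s = V / s
# So a variance share V from genes of effect 1/N leaves room for roughly
# V*N standard deviations of improvement.

def room_sd(variance_share, effect_sd):
    """SDs of improvement from correcting variants of the given effect size."""
    return variance_share / effect_sd

# Half the variance from 1/10-effect genes, half from 1/100-effect genes:
print(room_sd(0.5, 0.1) + room_sd(0.5, 0.01))    # 55.0

# The answer is linear in the variance share: if mutational load is only
# 10% of IQ variance (the lower bound above), the same mix gives 5.5 SD,
# the scale of the "5 standard deviations" guess earlier in the thread.
print(room_sd(0.05, 0.1) + room_sd(0.05, 0.01))  # 5.5
```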
If the variation came from positive genes, an additive model would suggest much more room for improvement, but stacking such genes would be much less likely to go well than correcting mutations back to the wild type.
It is hard to tell in advance what is important. Quite a few innovations that were promised to change everything turned out to have much more limited value.
Within a decade we should know a lot about the genetic basis of intelligence
I don’t see any reason for it. So far, all knowledge in this area is just correlations between some genes and IQ, with no understanding of how they work. Judging from the history of other technologies, with such a theoretical base any major improvement takes centuries of trial and error.
This Wired article claims that a knowledgeable scientist thinks that the “off-target mutations are already a solved problem.”
Even if the CRISPR protein itself doesn’t cause mutations, you will likely have to duplicate DNA a few times via PCR, which produces additional errors.
According to Wikipedia:
This means that a human genome accumulates around 64 new mutations per generation because each full generation involves a number of cell divisions to generate gametes
I think we are very far from reaching the same level of mutations as natural reproduction, let alone a lower one. The difficult question will be what level of mutations is acceptable.
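For a sense of scale, a rough comparison (the polymerase error rates are my assumptions, roughly the range commonly quoted for Taq versus high-fidelity enzymes; and in practice one amplifies short fragments, not whole genomes, so this is only an upper-bound illustration):

```python
# Rough comparison of PCR copying errors with natural per-generation
# mutations. Error rates are assumptions: roughly 1e-4 errors per base
# per duplication for Taq-like enzymes, down to ~1e-6 for high-fidelity
# enzymes; exact values vary by enzyme and protocol.
GENOME_BP = 3.2e9        # haploid human genome, base pairs
NATURAL_PER_GEN = 64     # new mutations per generation, per the quote above

for name, rate in [("Taq-like", 1e-4), ("high-fidelity", 1e-6)]:
    errors = GENOME_BP * rate
    print(f"{name}: ~{errors:,.0f} errors per whole-genome duplication "
          f"(vs. {NATURAL_PER_GEN} natural mutations per generation)")
```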
Within a decade we should know a lot about the genetic basis of intelligence. Wouldn’t it then probably be easy to create embryos that grow into extremely smart people, far smarter than any who have ever existed?
If gene A raises IQ and gene B also raises IQ, that doesn’t mean that both genes together will raise IQ even more; they might cancel each other out. A few people will grow up to be extremely smart, but I don’t think that will be the case for every embryo in the project.
Bit late, but Aubrey de Grey in his latest reddit AMA estimates that CRISPR/Cas9 cuts about 20 (!) years off the SENS/immortality timeline.
As a man approaching 50, I desperately hope this is true.
Unfortunately I have to retract my above statement; I checked https://www.reddit.com/r/Futurology/comments/3fri9a/ask_aubrey_de_grey_anything/ and found no concrete timeframe. But he does give estimates:
https://www.reddit.com/r/Futurology/comments/3fri9a/ask_aubrey_de_grey_anything/ctr90ru
Seems as if he gives a 50-year-old about a 50% chance of still being around when SENS arrives.