That is, what is the technological constraint that is most limiting our ability to significantly enhance the cognitive performance of people born this year?
Links to long-form sources that answer this, and related questions, would be appreciated.
The question title says “bottleneck”, but the body says “technological constraint.” But I wonder—is the bottleneck a technological constraint, or is it a political constraint? (That is, maybe the expertise exists, but literally no one has the conjunction of expertise and bravery to actually do it.) The FAQ page of Genomic Prediction says:
Does Genomic Prediction Clinical Laboratory screen embryos for increased intelligence i.e. high IQ?
No. We only screen against negative (disease) risks.
But if you’re already providing polygenic scores for the purpose of choosing an IVF embryo for implantation, there would seem to be no technical obstacle to using a polygenic score for intelligence. Right?
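For concreteness, the selection step itself is computationally trivial: a polygenic score is just a weighted sum of per-variant effect sizes over an embryo’s genotype, and you implant the highest-scoring embryo. A minimal sketch, with entirely made-up effect sizes and genotypes:

```python
# Minimal sketch of selecting an IVF embryo by polygenic score.
# A polygenic score is a weighted sum: each variant's effect size times
# the number of copies of that allele the embryo carries.
# All effect sizes and genotypes below are toy values for illustration.

effect_sizes = {"rs1": 0.12, "rs2": -0.05, "rs3": 0.30}

embryos = {
    "embryo_A": {"rs1": 2, "rs2": 0, "rs3": 1},  # allele counts (0, 1, or 2)
    "embryo_B": {"rs1": 1, "rs2": 2, "rs3": 0},
    "embryo_C": {"rs1": 0, "rs2": 1, "rs3": 2},
}

def polygenic_score(genotype):
    return sum(effect_sizes[snp] * count for snp, count in genotype.items())

best = max(embryos, key=lambda name: polygenic_score(embryos[name]))
print(best, polygenic_score(embryos[best]))  # -> embryo_C 0.55
```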
Given that you can buy all sorts of things on the black market, politics is only a constraint on doing such projects for prestige; it isn’t much of a constraint if you actually know how to make very intelligent babies and have parents who want that.
I don’t really have much, but this is at least from last year:
Steve Hsu discusses Human Genetic Engineering and CRISPR babies in this (the first?) episode of the podcast he hosts with Corey Washington: https://manifoldlearning.com/podcast-episode-1-crispr-babies/?utm_source=rss&utm_medium=rss&utm_campaign=podcast-episode-1-crispr-babies
Transcript: https://manifoldlearning.com/episode-001-transcript/
The best way to radically increase the intelligence of humans would be to use Greg Cochran’s idea of replacing rare genetic variants with common ones, thereby greatly reducing mutational load. Because of copying errors, new mutations keep getting introduced into populations, but evolutionary selection keeps working to reduce the spread of harmful mutations. Consequently, if an embryo has a mutation that few other people have, that mutation is far more likely to be harmful than beneficial. Replacing all rare genetic variants in an embryo with common ones would likely result in the eventual creation of a person much smarter and healthier than has ever existed. The primary advantage of Cochran’s genetic engineering approach is that we can implement it before we learn the genetic basis of human intelligence. The main technical obstacle to implementing it, from what I understand, is the inability to edit genes with sufficient accuracy, at sufficiently low cost, and with sufficiently few side effects.
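A minimal sketch of what “replace rare variants with common ones” means operationally, assuming you already have population allele frequencies and the embryo’s genotype (the data structures and the 1% rarity threshold below are illustrative assumptions):

```python
# Hypothetical sketch: plan edits that swap an embryo's rare alleles for
# the population's common (major) allele at each site. Assumes
# `population_freqs` maps (position, allele) to that allele's population
# frequency, and `major_allele` maps each position to its most common allele.

RARE_THRESHOLD = 0.01  # treat alleles below 1% frequency as "rare"

def plan_edits(embryo_genotype, population_freqs, major_allele):
    """Return (position, rare_allele, common_allele) tuples to edit."""
    edits = []
    for pos, allele in embryo_genotype.items():
        freq = population_freqs.get((pos, allele), 0.0)
        if freq < RARE_THRESHOLD and allele != major_allele[pos]:
            edits.append((pos, allele, major_allele[pos]))
    return edits

# Example with toy data:
embryo = {101: "A", 202: "T", 303: "G"}
freqs = {(101, "A"): 0.40, (202, "T"): 0.002, (303, "G"): 0.55}
major = {101: "A", 202: "C", 303: "G"}
print(plan_edits(embryo, freqs, major))  # -> [(202, 'T', 'C')]
```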
It seems like there could be a tail risk here of decreasing genetic variation and thereby increasing the impact of something like a pandemic. It also seems like this approach could lead to less diversity in ideas/art/businesses, etc., because genetic predispositions would become a monoculture.
Do you think either of these things are substantial worries? Am I misunderstanding something about what’s being suggested here?
Think of mutational load as errors. Reducing errors in the immune system’s genetic code should decrease the risk of pandemics. Reducing errors in people’s brains should greatly increase the quality of intellectual output. Hitting everyone in the head with a hammer a few times could, I suppose, through an extraordinarily lucky hit, cause someone to produce something good that they otherwise wouldn’t, but most likely the hammer blows (analogous to mutational load) would just give us bad stuff.
An alternative to editing many genes individually is to synthesise the whole genome from scratch, which is plausibly cheaper and more accurate.
While it’s plausible that there will be a future where that’s cheaper, it’s currently 9 figures to synthesize a human genome from scratch. Whether there will ever be a time when that’s cheaper than more targeted modifications is very much an open question.
In the case of reducing mutational load to near zero, you might be doing targeted changes to huge numbers of genes. There is presumably some point at which it’s easier to create a genome from scratch.
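As a rough back-of-the-envelope illustration of where that crossover might sit (the synthesis figure echoes the 9-figure estimate above; the per-edit cost is a made-up assumption):

```python
# Illustrative break-even point between per-edit genome editing and
# whole-genome synthesis. Both cost figures are assumptions for the sake
# of the sketch, not real prices.

synthesis_cost = 1e8     # assumed: ~9 figures to synthesize a human genome
cost_per_edit = 100.0    # assumed: all-in cost of one validated targeted edit

break_even_edits = synthesis_cost / cost_per_edit
print(f"Per-edit editing is cheaper below ~{break_even_edits:,.0f} edits")
# -> Per-edit editing is cheaper below ~1,000,000 edits
```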
I agree it’s an open question though!
I don’t see why that should be the case. I see no reason in principle why you shouldn’t be able to scale up targeted changes to do 20,000 changes (~the number of human genes).
If you want to zero out mutational load and create a modal genome, you’re approximately 2 orders of magnitude off in the number of edits you need to do. (The number of evolutionarily-conserved protein-coding regions has little to do with the number of total variants across the billions of basepairs in a specific human’s genome.) Considering that it is unlikely that we will get base editors, ever, which have total error rates approaching 1 in millions, one will have to be a little more clever about how one goes about it. (Maybe some sort of multi-generational mass mutagenesis-editing / screening loop?)
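To make the scale problem concrete, here is a toy expected-error calculation (both the variant count and the error rates are illustrative assumptions):

```python
# Toy calculation: expected number of erroneous edits when editing a
# genome toward the modal sequence. All numbers are illustrative
# assumptions, not measured figures.

variants_to_edit = 2_000_000     # assumed: ~100x the 20,000 figure discussed above

for error_rate in (1e-3, 1e-6):  # a plausible rate vs. an optimistic 1-in-a-million rate
    expected_errors = variants_to_edit * error_rate
    print(f"error rate {error_rate:g}: ~{expected_errors:,.0f} expected bad edits")
# error rate 0.001: ~2,000 expected bad edits
# error rate 1e-06: ~2 expected bad edits
```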
Anyway, I would point out that you can do genome synthesis on a much cheaper scale than whole-genome: whole-chromosome is an obvious intermediate point which would be convenient to swap out. And for polygenic traits, optimizing a single chromosome might push the phenotype out as far as you want to go in a single generation anyway.
Do you have a citation for this?
I think it is somewhere in this podcast we did:
https://soundcloud.com/user-519115521/cochran-on-increasing-iq