Even the brightest geniuses don’t really start having much of an impact on a field until about 20. And it takes further time for ideas to spread, so perhaps they’d need to reach the age of 30.
We could probably create humans vastly smarter than have ever previously existed with full genome synthesis, who could have a huge impact at a much younger age. But otherwise I agree.
Another short-term technology not even mentioned on your list is gamete sequencing. Sperm and eggs are produced in groups of four, with two complementary pairs per stem cell. If we could figure out how to get a good enough read from three of those cells, we could infer the genome of the fourth and pair up the best sperm and egg. That would naively allow us to double the gain, so 24 points.
Wouldn’t it be a factor of sqrt(2), not double?
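For what it's worth, here's a quick Monte Carlo sketch of the naive additive model (my assumptions, not from the thread: equal candidate pools on both sides, each gamete contributing half the embryo-level variance, and a perfect predictor). Under those assumptions, pairing the best sperm with the best egg beats picking the best of the same number of embryos by a factor of sqrt(2), not 2:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n = 100_000, 4  # pool of 4 candidate gametes per side (arbitrary)

# Additive model: each gamete contributes half the embryo-level variance.
sperm = rng.normal(0.0, np.sqrt(0.5), size=(n_trials, n))
eggs = rng.normal(0.0, np.sqrt(0.5), size=(n_trials, n))

# Strategy A: fertilize n random sperm-egg pairs, then pick the best embryo.
best_embryo = (sperm + eggs).max(axis=1)

# Strategy B: pick the best sperm and the best egg, then fertilize.
best_pairing = sperm.max(axis=1) + eggs.max(axis=1)

ratio = best_pairing.mean() / best_embryo.mean()
print(f"gain ratio: {ratio:.3f} (sqrt(2) = {np.sqrt(2):.3f})")
```

Getting all the way to a factor of 2 would need something beyond this toy setup, e.g. the vastly larger pool of selectable sperm, or a different baseline for comparison, so I may simply be modeling a different question than the doubling claim is.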
There are other technologies like in-vitro oogenesis that could raise the gain by perhaps 50% (assuming we could produce a couple of thousand embryos). And there are groups that are working on that right now.
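As a rough sanity check on how batch size matters (a sketch, assuming a perfect predictor and scoring embryos in units of the within-family standard deviation; realized gains shrink with predictor accuracy): the expected score of the top embryo out of n grows only like sqrt(2 ln n), so each extra order of magnitude of embryos buys progressively less.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_best(n_embryos, n_trials=2_000):
    # Average score of the top embryo out of n_embryos, in units of the
    # within-family standard deviation, with a perfect predictor.
    scores = rng.normal(size=(n_trials, n_embryos))
    return scores.max(axis=1).mean()

for n in (5, 10, 100, 2000):
    print(f"{n:>4} embryos -> top embryo ~ {mean_best(n):.2f} SD above family mean")
```

Going from roughly 10 embryos to a couple of thousand a bit more than doubles the raw selection differential in this idealized setup, so the "perhaps 50%" figure presumably already discounts for an imperfect predictor and other practical losses.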
That sounds fairly promising and worth looking into.
I don’t think genome synthesis is likely to be possible in time. CRISPR or some other editing technique might work in the next 10 years, but the public seems to be much less comfortable with editing as opposed to selection, so that might be more politically difficult.
Agreed, which makes my previous point somewhat moot. I’m tempted to say we should at least keep synthesis in the back of our minds in case the problems on the critical path end up being easier than expected.
Lastly, even if we could create such a predictor, what weirdo parents would select for “likely to work on x-risk-reduction”? The parents themselves would have to be convinced that x-risk is a problem, so it’s a somewhat circular solution.
Alignment-problem-aware people could be early adopters of embryo-selection-for-G. There are lots of smart alignment-problem-aware people who read this forum and may be open to this idea, so it’s not necessarily circular.
I am very nervous about any solutions which require the government to enforce selection for certain traits.
I think it’s super unlikely we’d be able to get this sort of large scale coordination anyways.
The only strategy that seems viable to me is enhanced intelligence + changing the memetic environment. I don’t think genetics is going to provide a substitute for the work that has to be done by us stone-brainers to convince more people that misaligned AI is a serious threat.
I don’t think large scale awareness is necessary (see my above point). Even if you could do it, pushing for large scale awareness could backfire by drawing the wrong sort of attention (e.g. by resulting in public outrage about selection-for-G so politicians move to ban it). Though I admittedly don’t place much confidence in my current ability to gauge the likelihood of this sort of thing. More awareness of the alignment problem is probably good.
I am also optimistic that more intelligent people would better grasp the arguments about AI safety and other sources of x-risk. There’s also some research about intelligent people’s disproportionate tendency to support enforcement of rules encouraging positive-sum cooperation that I wrote about in my first post on genetic engineering, so I can see this potentially helping with the coordination aspects of AI and other fields.
Anyhow, I’ve updated slightly towards focusing more on thinking about near-term embryo selection strategies as a result of reading and responding to this.
> We could probably create humans vastly smarter than have ever previously existed with full genome synthesis, who could have a huge impact at a much younger age. But otherwise I agree.
This is true, but the farther out into the tails of the distribution we get the more likely we are to see negative effects from traits that aren’t part of the index we’re selecting on. For example, I would be pretty surprised if we could increase IQ by 10 standard deviations in one generation without some kind of serious deleterious effects.
> Alignment-problem-aware people could be early adopters of embryo-selection-for-G. There are lots of smart alignment-problem-aware people who read this forum and may be open to this idea, so it’s not necessarily circular.
Yeah, this is one of my hopes. I will probably write something about this in the future.
> I don’t think large scale awareness is necessary (see my above point). Even if you could do it, pushing for large scale awareness could backfire by drawing the wrong sort of attention (e.g. by resulting in public outrage about selection-for-G so politicians move to ban it). Though I admittedly don’t place much confidence in my current ability to gauge the likelihood of this sort of thing. More awareness of the alignment problem is probably good.
I mostly think the value would be in more actual understanding of alignment difficulties among people working on AI capabilities.
> This is true, but the farther out into the tails of the distribution we get the more likely we are to see negative effects from traits that aren’t part of the index we’re selecting on.
True, but we wouldn’t need to strictly select for G by association with IQ via GWASes. I suspect G variation is largely driven by mutation load, in which case simply replacing each rare variant with one of its more common counterparts should give you a huge boost while essentially ruling out negative pleiotropy. To hedge your bets you’d probably want to do a combined approach.
I guess there’s some risk that rare variants are involved in people who, e.g., tend to take x-risk very seriously, but I doubt this. I suspect that, to whatever extent this is heritable, it’s controlled by polygenic variation over relatively common variants at many loci. So if you started out with the genomes of people who care lots about x-risk and then threw out all the rare variants, I predict you’d end up with hugely G boosted people who are predisposed to care about x-risk.
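The rare-variant argument can be made concrete with a toy mutation-load model (all numbers invented for illustration: rare variants modeled as uniformly mildly deleterious, common variants as symmetric). The point it illustrates is just that, under this assumption, swapping rare alleles for their common counterparts yields a gain that doesn’t depend on knowing anything about the common polygenic background:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rare, n_common = 50_000, 10_000

# Mutation-load model: rare alleles are individually uncommon and, when
# present, mildly deleterious; common variants have symmetric +/- effects.
rare_counts = rng.binomial(2, 0.0005, size=n_rare)          # 0/1/2 alleles
rare_effects = -np.abs(rng.normal(0.0, 0.02, size=n_rare))  # all harmful
common_counts = rng.binomial(2, 0.5, size=n_common)
common_effects = rng.normal(0.0, 0.01, size=n_common)       # symmetric

baseline = rare_counts @ rare_effects + common_counts @ common_effects
# Hypothetical clean-up: replace every rare allele with its common counterpart.
cleaned = common_counts @ common_effects

print(f"gain from removing rare load: {cleaned - baseline:.2f}")
```

Real effect-size distributions are nowhere near this tidy, and whether rare variants carry any positive pleiotropy is an empirical question; this is only the shape of the argument.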
As you pointed out, this is moot if genome synthesis is out of reach.
> I don’t think large scale awareness is necessary (see my above point). Even if you could do it, pushing for large scale awareness could backfire by drawing the wrong sort of attention (e.g. by resulting in public outrage about selection-for-G so politicians move to ban it). Though I admittedly don’t place much confidence in my current ability to gauge the likelihood of this sort of thing. More awareness of the alignment problem is probably good.

Agreed, society-wide gains in G would likely have the general effect of raising the sanity waterline.

> Wouldn’t it be a factor of sqrt(2), not double?

I have to admit, I haven’t actually done the math here, but Gwern seems to think it would roughly double the effect.

> Yeah, this is one of my hopes. I will probably write something about this in the future.
>
> I mostly think the value would be in more actual understanding of alignment difficulties among people working on AI capabilities.

Thanks for the response.

> True, but we wouldn’t need to strictly select for G by association with IQ via GWASes. I suspect G variation is largely driven by mutation load, in which case simply replacing each rare variant with one of its more common counterparts should give you a huge boost while essentially ruling out negative pleiotropy. To hedge your bets you’d probably want to do a combined approach.
>
> I guess there’s some risk that rare variants are involved in people who, e.g., tend to take x-risk very seriously, but I doubt this. I suspect that, to whatever extent this is heritable, it’s controlled by polygenic variation over relatively common variants at many loci. So if you started out with the genomes of people who care lots about x-risk and then threw out all the rare variants, I predict you’d end up with hugely G boosted people who are predisposed to care about x-risk.
>
> As you pointed out, this is moot if genome synthesis is out of reach.

Seems sensible.