The hope is that local neural function could be altered in a way that improves fluid intelligence, and/or that larger scale structural changes could happen in response to the edits (possibly contingent on inducing a childlike state of increased plasticity).
Showing that many genes can be successfully and accurately edited in a live animal (ideally human). As far as I know, this hasn’t been done before! Only small edits have been demonstrated.
This is more or less our current plan.
Showing that editing embryos can result in increased intelligence. I don’t believe this has even been done in animals, let alone humans.
This has some separate technical challenges, and is also probably more taboo? The only reason successfully editing embryos wouldn't increase intelligence would be if the variants being targeted weren't actually causal for intelligence.
Gene editing to make people taller.
This seems harder; you'd need to somehow unfuse the growth plates.
On the other hand, “our patients gained 3 IQ points, we swear” is not as easily verifiable.
A nice thing about IQ is that it’s actually really easy to measure. Noisier than measuring height, sure, but not terribly noisy.
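To put rough numbers on "not terribly noisy" (the reliability figure below is an assumed ballpark, not something from the post): with test-retest reliability around 0.9, the standard error of measurement of a single IQ test sitting is only a few points, and averaging multiple sittings shrinks it further.

```python
# Toy illustration (assumed numbers): standard error of measurement (SEM)
# for an IQ test, using the classical formula SEM = SD * sqrt(1 - reliability).
import math

sd = 15.0            # IQ standard deviation by construction
reliability = 0.9    # assumed test-retest reliability of a good IQ test

sem = sd * math.sqrt(1 - reliability)
print(f"SEM ~= {sem:.1f} IQ points")  # roughly 4.7 points for one sitting

# Averaging k independent sittings shrinks the error by sqrt(k):
for k in (1, 2, 4):
    print(k, "sittings:", round(sem / math.sqrt(k), 1), "points")
```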
These would also all make you rich, and they should all be easier than editing the brain. Why do rationalists always jump to the brain?
More intelligence enables progress on important, difficult problems, such as AI alignment.
Probably not? The effect sizes of the variants in question are tiny, which is probably why their intelligence-promoting alleles aren’t already at fixation.
There probably are loads of large effect size variants which affect intelligence, but they’re almost all at fixation for the intelligence-promoting allele due to strong negative selection. (One example of a rare intelligence promoting mutation is CORD7, which also causes blindness).
I think that most are focusing on single-gene treatments because that’s the first step. If you can make a human-safe, demonstrably effective gene-editing vector for the brain, then jumping to multiplex is a much smaller step (effective as in does the edits properly, not necessarily curing a disease). If this were a research project I’d focus on researching multiplex editing and letting the market sort out vector and delivery.
Makes sense.
I am more concerned about the off-target effects; neurons still mostly function with a thousand random mutations, but you are planning to specifically target regions that have a supposed effect. I would assume that most causal effects in noncoding regions act through regulator binding sites (alternatively: ncRNA?), which are quite sensitive to small sequence changes. So my assumption would be a higher likelihood of catastrophic mutations than you assume.
The thing we’re most worried about here is indels at the target sites. The hope is that adding or subtracting a few bases won’t be catastrophic, since the effects of the variants at the target sites are tiny (and we don’t have frameshifts to worry about). Of course, the sites could still be sensitive to small changes while permitting specific variants.
I wonder whether disabling a regulatory binding site would tend to be catastrophic for the cell? E.g. what would be the effect of losing one enhancer (of which there are many per gene on average)? I’d guess some are much more important than others?
This is definitely a crux for whether mass brain editing is doable without a major breakthrough: if indels at target sites are a big deal, then we’d need to wait for editors with negligible indel rates (a far lower rate per successful edit than the current best editors achieve).
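To make that crux concrete, here's a toy calculation (the indel rates and edit counts are placeholders, not measured values for any editor): the expected indel burden per cell scales linearly with the number of target sites, so the tolerable per-edit indel rate shrinks as the number of edits grows.

```python
# Toy calculation with placeholder numbers: expected indels per cell
# when attempting n edits, each carrying an independent indel risk r.
def expected_indels(n_edits: int, indel_rate_per_edit: float) -> float:
    return n_edits * indel_rate_per_edit

def p_no_indels(n_edits: int, indel_rate_per_edit: float) -> float:
    # Probability a given cell escapes indels entirely (independence assumed).
    return (1 - indel_rate_per_edit) ** n_edits

for r in (0.1, 0.01, 0.001):          # hypothetical per-edit indel rates
    for n in (10, 100, 1000):         # hypothetical numbers of target sites
        print(f"rate={r}, edits={n}: "
              f"E[indels]={expected_indels(n, r):.1f}, "
              f"P(no indels)={p_no_indels(n, r):.3f}")
```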
Also, given that your target is in nonreplicating cells, buildup of unwanted protein might be an issue if you’re doing multiple rounds of treatment.
If the degradation of editor proteins turns out to be really slow in neurons, we could do a lower dose and let them ‘hang around’ for longer. Final editing efficiency is related to the product of editor concentration and time of exposure. I think this could actually be a good thing because it would put less demand on delivery efficiency.
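Here's a minimal toy model of that "concentration times time of exposure" point (the first-order kinetics assumption and all numbers are mine, purely for illustration): if editing proceeds at a rate proportional to editor concentration, the endpoint depends on the integrated exposure, so halving the dose while doubling residence time lands in roughly the same place.

```python
# Toy first-order kinetics model (assumed, for illustration):
# d(edited)/dt = k * c(t) * (1 - edited), with editor decaying as c(t) = c0 * exp(-t/tau).
# Integrating gives edited = 1 - exp(-k * c0 * tau * (1 - exp(-t/tau))),
# so what matters in the long run is the exposure c0 * tau, not c0 alone.
import math

def edited_fraction(c0: float, tau: float, k: float = 1.0, t: float = float("inf")) -> float:
    exposure = c0 * tau if math.isinf(t) else c0 * tau * (1 - math.exp(-t / tau))
    return 1 - math.exp(-k * exposure)

# Halving the dose but doubling the editor's residence time gives the same endpoint:
print(edited_fraction(c0=1.0, tau=1.0))   # baseline
print(edited_fraction(c0=0.5, tau=2.0))   # lower dose, slower degradation
```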
Additionally, I’m guessing a number of edits will have no effect, as their effect is during development. If only we had some idea how these variants worked, so we could screen them out ahead of time.
Studying the transcriptome of brain tissue is a thing. That could be a way to find the genes which are significantly expressed in adults, and then we’d want to identify variants which affect expression of those genes (spatial proximity would be the rough and easy way).
Significant expression in adults is no guarantee of effect, but seems like a good place to start.
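A sketch of what that screen might look like in practice; the file names, column names, and thresholds below are all hypothetical placeholders, not an actual pipeline.

```python
# Hypothetical sketch: keep only candidate variants lying near genes that are
# meaningfully expressed in adult brain tissue. Files, columns, and cutoffs
# are placeholders.
import pandas as pd

MIN_TPM = 1.0          # assumed expression cutoff for "expressed in adults"
MAX_DISTANCE = 50_000  # assumed window (bp) around a gene's TSS for "spatial proximity"

expr = pd.read_csv("adult_brain_expression.csv")      # columns: gene, chrom, tss, tpm
variants = pd.read_csv("candidate_variants.csv")      # columns: rsid, chrom, pos, beta

adult_genes = expr[expr["tpm"] >= MIN_TPM]

def near_expressed_gene(row) -> bool:
    nearby = adult_genes[adult_genes["chrom"] == row["chrom"]]
    return bool(((nearby["tss"] - row["pos"]).abs() <= MAX_DISTANCE).any())

variants["keep"] = variants.apply(near_expressed_gene, axis=1)
print(variants[variants["keep"]].sort_values("beta", ascending=False).head())
```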
Finally, this all assumes that intelligence is a thing and can be measured. Intelligence is probably one big phase space, and measurements capture a subset of that, confounded by other factors. But that’s getting philosophical, and as long as it doesn’t end up as eugenics (Gattaca or Hitler) it’s probably fine.
g sure seems to be a thing and is easy to measure. That’s not to say there aren’t multiple facets of intelligence/ability—people can be “skewed out” in different ways that are at least partially heritable, and maintaining cognitive diversity in the population is super important.
One might worry that psychometric g is the principal component of the easy-to-measure components of intelligence, and that there are also important hard-to-measure components (or important things that aren’t exactly intelligence components / abilities, e.g. wisdom). Ideally we’d like to select for these too, but we should probably be fine as long as we aren’t accidentally selecting against them?
Really interesting, thanks for commenting.
My lab does research specifically on in vitro gene editing of T-cells, mostly via Lentivirus and electroporation, and I can tell you that this problem is HARD.
Are you doing traditional gene therapy or CRISPR-based editing?
If the former, I’d guess you’re using Lentivirus because you want genome integration?
If the latter, why not use Lipofectamine?
How do you use electroporation?
Even in-vitro, depending on the target cell type and the amount used, it is very difficult to get transduction efficiencies higher than 70%, and that is with the help of chemicals like Polybrene, which significantly increases viral uptake and is not an option for in-vivo editing.
Does this refer to the proportion of the remaining cells which had successful edits / integration of donor gene? Or the number that were transfected at all (in which case how is that measured)?
Essentially, in order to make this work for in-vivo gene editing of an entire organ (particularly the brain), you need your transduction efficiency to be at least 2-3 orders of magnitude higher than current technologies allow on their own, just to make up for the lack of polybrene/retronectin and hit your 50% target.
This study achieved up to 59% base editing efficiency in mouse cortical tissue, while this one achieved up to 42% prime editing efficiency (both using a dual AAV vector). These contributed to our initial optimism that the delivery problem wasn’t completely out of reach. I’m curious what you think of these results, maybe there’s some weird caveat I’m not understanding.
The short answer is that they are, but they are doing it in much smaller steps. Rather than going straight for the holy grail of editing an organ as large and complex as the brain, they are starting with cell types and organs that are much easier to make edits to.
This is my belief as well—though the dearth of results on multiplex editing in the literature is strange. E.g. why has no one tried making 100 simultaneous edits at different target sequences? Maybe it’s obvious to the experts that the efficiency would be too low to bother with?
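For what it's worth, here's a toy way to see why "too low to bother with" is at least plausible (the per-site efficiencies are assumptions, not measurements of any multiplex system): if each site is edited independently with probability p in a given cell, the number of successful edits per cell is binomial, and the chance of getting most of 100 sites edited collapses quickly as p drops.

```python
# Toy model with assumed numbers: if each of n target sites is edited
# independently with per-site efficiency p, the number of successful edits
# per cell is Binomial(n, p).
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 100                       # hypothetical number of simultaneous targets
for p in (0.05, 0.2, 0.4):    # hypothetical per-site efficiencies
    print(f"p={p}: E[edits]={n * p:.0f}, "
          f"P(at least half the sites edited)={p_at_least(n // 2, n, p):.3f}")
```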
The smaller size of Fanzors compared to Cas9 is appealing and the potential for lower immunogenicity could end up being very important for multiplex editing (if inflammation in off-target tissues is a big issue, or if an immune response in the brain turns out to be a risk).
The most important things are probably editing efficiency and the ratio of intended to unintended edits. Hard to know how that will shake out until we have Fanzor equivalents of base and prime editors.
(I should clarify, I don’t see modification of polygenic traits just as a last ditch hail mary for solving AI alignment—even in a world where I knew AGI wasn’t going to happen for some reason, the benefits pretty clearly outweigh the risks. The case for moving quickly is reduced, though.)
The stakes could hardly be more different—polygenic trait selection doesn’t get everyone killed if we get it slightly wrong.
How large are the Chinese genotype datasets?
The scaling laws are extremely well established in DL and there are strong theoretical reasons (and increasingly experimental neurosci evidence) that they are universal to all NNs, and we have good theoretical models of why they arise.
I’m not aware of these—do you have any references?
Both brains and DL systems have fairly simple architectural priors in comparison to the emergent learned complexity
True but misleading? Isn’t the brain’s “architectural prior” a heckuva lot more complex than the things used in DL?
Brains are very slow so have limited combinatorial search, and our search/planning is just short term learning (short/medium term plasticity). Again it’s nearly all learning (synaptic updates).
Sure. The big crux here is whether plasticity of stuff which is normally “locked down” in adulthood is needed to significantly increase “fluid intelligence” (by which I mean, something like, whatever allows people to invent useful new concepts and find ingenious applications of existing concepts). I’m not convinced these DL analogies are useful—what properties do brains and deepnets share that renders the analogies useful here? DL is a pretty specific thing, so by default I’d strongly expect brains to differ in important ways. E.g. what if the structures whose shapes determine the strength of fluid intelligence aren’t actually “locked down”, but reach a genetically-influenced equilibrium by adulthood, and changing the genes changes the equilibrium? E.g. what if working memory capacity is limited by the noisiness of neural transmission, and we can reduce the noisiness through gene edits?
I find the standard arguments for doom implausible—they rely on many assumptions contradicted by deep knowledge of computational neuroscience and DL
FOOM isn’t necessary for doom—the convergent endpoint is that you have dangerously capable minds around: minds which can think much faster and figure out things we can’t. FOOM is one way to get there.
Of course if you combine gene edits with other interventions to rejuvenate older brains or otherwise restore youthful learning rate more is probably possible
We thought a bit about this, though it didn’t make the post. Agree that it increases the chance of the editing having a big effect.
ANNs and BNNs operate on the same core principles; the scaling laws apply to both and IQ in either is mostly a function of net effective training compute and data quality.
How do you know this?
Genes determine a brain’s architectural prior just as a small amount of python code determines an ANN’s architectural prior, but the capabilities come only from scaling with compute and data (quantity and quality).
In comparing human brains to DL, training seems more analogous to natural selection than to brain development. Much simpler “architectural prior”, vastly more compute and data.
So you absolutely can not take datasets of gene-IQ correlations and assume those correlations would somehow transfer to gene interventions on adults
We’re really uncertain about how much would transfer! It would probably affect some aspects of intelligence more than others, and I’m afraid it might just not work at all if g is determined by the shape of structures that are ~fixed in adults (e.g. long range white matter connectome). But it’s plausible to me that the more plastic local structures and the properties of individual neurons matter a lot for at least some aspects of intelligence (e.g. see this).
so to the extent this could work at all, it is mostly limited to interventions on children and younger adults who still have significant learning rate reserves
There’s a lot more to intelligence than learning. Combinatorial search, unrolling the consequences of your beliefs, noticing things, forming new abstractions. One might consider forming new abstractions as an important part of learning, which it is, but it seems possible to come up with new abstractions ‘on the spot’ in a way that doesn’t obviously depend on plasticity that much; plasticity would more determine whether the new ideas ‘stick’. I’m bottlenecked by the ability to find new abstractions that usefully simplify reality, not having them stick when I find them.
But it ultimately doesn’t matter, because the brain just learns too slowly. We are now soon past the point at which human learning matters much.
My model is there’s this thing lurking in the distance, I’m not sure how far out: dangerously capable AI (call it DCAI). If our current civilization manages to cough up one of those, we’re all dead, essentially by definition (if DCAI doesn’t kill everyone, it’s because technical alignment was solved, which our current civilization looks very unlikely to accomplish). We look to be on a trajectory to cough one of those up, but it isn’t at all obvious to me that it’s just around the corner: so stuff like this seems worth trying, since humans qualitatively smarter than any current humans might have a shot at thinking of a way out that we didn’t think of (or just having the mental horsepower to quickly get working something we have thought of, e.g. getting mind uploading working).
Repeat administration is a problem for traditional gene therapy too, since the introduced gene will often be eliminated rather than integrated into the host genome.
Mildly deleterious mutations take a long time to get selected out, so you end up with an equilibrium where a small fraction of organisms have them. Genetic load is a relevant concept.
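The equilibrium being described is mutation-selection balance. As a textbook approximation (illustrative numbers only, not estimates for any particular variant), an allele that is deleterious in carriers is maintained at an equilibrium frequency of roughly mu/s, so the milder the selection, the higher the frequency at which it persists.

```python
# Mutation-selection balance, textbook approximation for an allele whose
# deleterious effect is expressed in carriers: equilibrium frequency
# q ~= mu / s, with mu the per-generation mutation rate toward the allele
# and s the selection coefficient against carriers. Numbers are illustrative.
def equilibrium_freq(mu: float, s: float) -> float:
    return min(mu / s, 1.0)

mu = 1e-6                     # hypothetical per-locus mutation rate
for s in (0.1, 0.01, 0.001):  # strong -> mild selection against carriers
    print(f"s={s}: q ~= {equilibrium_freq(mu, s):.1e}")
```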
It seems fairly straightforward to test whether a chromosome transfer protocol results in physical/genetic damage in small scale experiments (e.g. replace chromosome X in cell A with chromosome Y in cell B, culture cell A, examine cell A’s chromosomes under a microscope + sequence the genome).
The epigenetics seems harder. Having a good gears-level understanding of the epigenetics of development seems necessary, because then you’d know what to measure in an experiment to test whether your protocol was epigenetically sound.
You probably wouldn’t be able to tell if the fruit fly’s development was “normal” to the same standards that we’d hold a human’s development to (human development is also just way more complicated, so the results may not generalize). That said, this sort of experiment seems worth doing anyways; if someone on LW was able to just go out and do it, that would be great.
A working protocol hasn’t been demonstrated yet, but it looks like there’s a decent chance it’s doable with the right stitching together of existing technologies and techniques. You can currently do things like isolating a specific chromosome from a cell line, microinjecting a chromosome into the nucleus of a cell, or deleting a specific chromosome from a cell. The big open questions are around avoiding damage and having the correct epigenetics for development.
From section 3.1.2:
C. The EU passes such a law. 90%
...
M. There’s nowhere that Jurgen Schmidhuber (currently in Saudi Arabia!) wants to move where he’s allowed to work on dangerously advanced AI, or he retires before he can make it. 50%
These credences feel borderline contradictory to me. M implies you believe that, conditional on no laws being passed which would make it illegal in any place he’d consider moving to, Jurgen Schmidhuber in particular has a >50% chance of building dangerously advanced AI within 20 years or so. Since you also believe the EU has a 90% chance of passing such a law before the creation of dangerously advanced AI, this implies you believe the EU has a >80% chance of outlawing the creation of dangerously advanced AI within 20 years or so. In fact, if we assume a uniform distribution over when JS builds dangerously advanced AI (such that it’s cumulatively 50% 20 years from now), that requires us to be nearly certain the EU would pass such a law within 10 years if we make it that long before JS succeeds. From where does such high confidence stem?
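To make the arithmetic easy to check, here's a stripped-down version of it (the uniform success-time distribution is the one assumed above; treating the law as arriving at a fixed year is a simplification I'm adding for illustration):

```python
# Assumptions: JS's would-be success time T_JS is Uniform(0, 40) years, so it is
# cumulatively 50% at 20 years as in the comment; the law arrives at a fixed
# year t_law (a simplification introduced here, not part of the original).
# Then P(law passes before JS would have succeeded) = P(T_JS > t_law) = 1 - t_law / 40.
for t_law in (2, 4, 10, 20):
    p_law_first = 1 - t_law / 40
    print(f"law at year {t_law}: P(law beats JS) = {p_law_first:.2f}")
# Under these assumptions the law has to land within the first few years
# for that probability to reach ~0.9.
```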
(Meta: I’m also not convinced it’s generally a good policy to be “naming names” of AGI researchers who are relatively unconcerned about the risks in serious discussions about AGI x-risk, since this could provoke a defensive response, “doubling down”, etc.)
allow multiple causal variants per clump
more realistic linkage disequilibrium structure
more realistic effect size and allele frequency distributions
it’s not actually clear to me the current ones aren’t realistic, but this could be better informed by data
this might require better datasets
better estimates of SNP heritability and number of causal variants
we just used some estimates which are common in the literature (but there’s a pretty big range of estimates in the literature)
this also might require better datasets
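For concreteness, here's a minimal sketch of the kind of simulation the items above describe improvements to (the distribution choices and parameter values are placeholder assumptions, not the ones used in the post): draw allele frequencies and effect sizes for a set of causal variants, then scale the effects so their additive variance matches a target SNP heritability.

```python
# Minimal sketch of a causal-variant simulation (all parameters are placeholder
# assumptions): sample allele frequencies and effect sizes, then scale effects
# so the total additive variance matches a target SNP heritability.
import numpy as np

rng = np.random.default_rng(0)

n_causal = 10_000   # assumed number of causal variants
target_h2 = 0.2     # assumed SNP heritability of the trait

# Allele frequencies from a U-shaped Beta; raw effect sizes from a normal whose
# scale is chosen post hoc so the total additive variance hits target_h2.
freqs = rng.beta(0.5, 0.5, size=n_causal).clip(0.01, 0.99)
raw_betas = rng.normal(0.0, 1.0, size=n_causal)

var_per_site = 2 * freqs * (1 - freqs) * raw_betas**2
scale = np.sqrt(target_h2 / var_per_site.sum())
betas = raw_betas * scale

print("variance explained:", (2 * freqs * (1 - freqs) * betas**2).sum())  # ~= target_h2
print("largest |beta| (trait SD units):", np.abs(betas).max())
```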