Good post. This looks possible, if not feasible.
“crazy, unpredictable, and dangerous” are all “potentially surmountable issues”. It’s just that we need more research into them before they stop being crazy, unpredictable, and dangerous. (Except quantum, I guess.)
I think that most are focusing on single-gene treatments because that’s the first step. If you can make a human-safe, demonstrably effective gene-editing vector for the brain, then jumping to multiplex is a much smaller step (effective meaning it performs the edits properly, not necessarily that it cures a disease). If this were a research project, I’d focus on researching multiplex editing and let the market sort out vector and delivery.
I am more concerned about the unintended effects of the edits themselves: neurons still mostly function with a thousand random mutations, but you are planning to specifically target regions that supposedly have an effect. I would assume that most of the effects in noncoding regions come from regulatory binding sites (or alternatively ncRNA?), which are quite sensitive to small sequence changes. So I would put the likelihood of catastrophic mutations higher than you do.
Promoters have a few important binding motifs whose spacing is extremely precise, but most binding motifs are a lot more flexible in how far apart they are.
Also, given that your target is in nonreplicating cells, buildup of unwanted protein might be an issue if you’re doing multiple rounds of treatment.
The accuracy of your variant data could/should be improved as well; most GWAS-based heritability data assumes random mating, which humans probably don’t do (mating is assortative). But if you’re planning on redoing/rechecking all the variants, that would be more accurate.
Additionally, I’m guessing a number of edits will have no effect because their effect is during development. If only we had some idea how these variants worked so we could screen them out ahead of time. I’m not sure what percentage of variants only have an effect during development, but it means you’ll need to do a lot more edits than strictly necessary and/or have a harder time detecting any effects of the edits. Luckily, genes that are always off are more likely to be silenced, so they might be harder to edit.
Though I would avoid editing silenced genes anyway: they’re generally off and not being expressed (and therefore less likely to have a current effect), and the act of editing usually unsilences a gene for a while, which is an additional level of disruption you probably don’t want to deal with.
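Circling back to the development-only point, here is a rough back-of-envelope for how much the edit count would inflate, assuming (purely for illustration) that development-only variants have effect sizes comparable to the rest:

```python
# Back-of-envelope: if a fraction f of target variants only act during
# development, the expected adult effect of an edit set shrinks by (1 - f),
# so reaching a given expected gain takes roughly 1 / (1 - f) times as many
# edits. The fractions below are hypothetical, not estimates from the post.
for f in (0.25, 0.5, 0.75):
    multiplier = 1.0 / (1.0 - f)
    print(f"{f:.0%} development-only variants -> ~{multiplier:.1f}x more edits needed")
```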
I don’t know how the Biobank measures “intelligence” but make sure it corresponds with what you’re trying to maximize [insert rehash of IQ test accuracy].
Finally, this all assumes that intelligence is a thing and can be measured. Intelligence is probably one big phase space, and measurements capture a subset of that, confounded by other factors. But that’s getting philosophical, and as long as it doesn’t end up as eugenics (Gattaca or Hitler) it’s probably fine.
Honestly, multiplex editing by itself would be useful and impressive; you don’t have to focus on intelligence. Perhaps something like muscle strength or cardiovascular health would be an easier sell.
I think that most are focusing on single-gene treatments because that’s the first step. If you can make a human-safe, demonstrably effective gene-editing vector for the brain, then jumping to multiplex is a much smaller step (effective meaning it performs the edits properly, not necessarily that it cures a disease). If this were a research project, I’d focus on researching multiplex editing and let the market sort out vector and delivery.
Makes sense.
I am more concerned about the unintended effects of the edits themselves: neurons still mostly function with a thousand random mutations, but you are planning to specifically target regions that supposedly have an effect. I would assume that most of the effects in noncoding regions come from regulatory binding sites (or alternatively ncRNA?), which are quite sensitive to small sequence changes. So I would put the likelihood of catastrophic mutations higher than you do.
The thing we’re most worried about here is indels at the target sites. The hope is that adding or subtracting a few bases won’t be catastrophic, since the effects of the variants at the target sites are tiny (and we don’t have frameshifts to worry about). Of course, the sites could still be sensitive to small changes while permitting specific variants.
I wonder whether disabling a regulatory binding site would tend to be catastrophic for the cell? E.g. what would be the effect of losing one enhancer (of which there are many per gene on average)? I’d guess some are much more important than others?
This is definitely a crux for whether mass brain editing is doable without a major breakthrough: if indels at target sites are a big deal, then we’d need to wait for editors with negligible indel rates (maybe ≤10⁻⁵ per successful edit, while the current best editors are more like 10⁻³ to 10⁻²).
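To make that concrete, a quick back-of-envelope (the 1,000-target figure is a hypothetical for illustration, not a number from the post):

```python
# Expected indel burden per fully edited cell ~= n_targets * indel_rate
# (per successful edit). The 1,000-target count is purely illustrative.
n_targets = 1_000

for indel_rate in (1e-2, 1e-3, 1e-5):
    expected_indels = n_targets * indel_rate
    print(f"indel rate {indel_rate:.0e}: ~{expected_indels:g} indels per fully edited cell")
```

At current rates that’s on the order of 1–10 indels in every fully edited cell; at 10⁻⁵ it drops to roughly one indel per hundred cells.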
Also, given that your target is in nonreplicating cells, buildup of unwanted protein might be an issue if you’re doing multiple rounds of treatment.
If the degradation of editor proteins turns out to be really slow in neurons, we could do a lower dose and let them ‘hang around’ for longer. Final editing efficiency is related to the product of editor concentration and time of exposure. I think this could actually be a good thing because it would put less demand on delivery efficiency.
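A minimal sketch of that dose × time intuition, assuming simple first-order kinetics (the functional form, rate constant, and numbers are invented for illustration, not measured values):

```python
import math

def edit_fraction(concentration, exposure_time, k=1.0):
    # Simple first-order model: P(edit) = 1 - exp(-k * concentration * time).
    # Both the model and k are illustrative assumptions.
    return 1.0 - math.exp(-k * concentration * exposure_time)

# Halving the dose while the editor hangs around twice as long gives the same
# concentration-time product, and therefore the same final editing efficiency.
print(edit_fraction(concentration=1.0, exposure_time=1.0))  # ~0.632
print(edit_fraction(concentration=0.5, exposure_time=2.0))  # ~0.632
```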
Additionally, I’m guessing a number of edits will have no effect because their effect is during development. If only we had some idea how these variants worked so we could screen them out ahead of time.
Studying the transcriptome of brain tissue is a thing. That could be a way to find the genes which are significantly expressed in adults, and then we’d want to identify variants which affect expression of those genes (spatial proximity would be the rough and easy way).
Significant expression in adults is no guarantee of effect, but seems like a good place to start.
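Something like this filter, sketched with toy numbers (the gene names, TPM values, coordinates, and thresholds below are all invented; real inputs would be something like GTEx adult brain expression plus the candidate variant list):

```python
# (gene, chromosome, transcription start site, adult-brain expression in TPM)
genes = [
    ("GENE_A", "chr1", 1_200_000, 35.0),
    ("GENE_B", "chr1", 4_800_000, 0.2),
    ("GENE_C", "chr2", 9_500_000, 12.5),
]

# (variant id, chromosome, position) -- hypothetical variants
variants = [
    ("rs_hypothetical_1", "chr1", 1_180_000),
    ("rs_hypothetical_2", "chr1", 4_790_000),
    ("rs_hypothetical_3", "chr2", 9_950_000),
]

TPM_THRESHOLD = 1.0  # arbitrary cutoff for "significantly expressed in adults"
WINDOW = 50_000      # keep variants within +/-50 kb of an expressed gene's TSS

expressed = [(g, c, tss) for g, c, tss, tpm in genes if tpm >= TPM_THRESHOLD]

kept = []
for rsid, v_chrom, pos in variants:
    for gene, g_chrom, tss in expressed:
        if v_chrom == g_chrom and abs(pos - tss) <= WINDOW:
            kept.append((rsid, gene))
            break

print(kept)  # [('rs_hypothetical_1', 'GENE_A')]
```

Spatial proximity is obviously crude (it misses long-range regulatory contacts); eQTL data would be the less rough way to link variants to expression.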
Finally, this all assumes that intelligence is a thing and can be measured. Intelligence is probably one big phase space, and measurements capture a subset of that, confounded by other factors. But that’s getting philosophical, and as long as it doesn’t end up as eugenics (Gattaca or Hitler) it’s probably fine.
g sure seems to be a thing and is easy to measure. That’s not to say there aren’t multiple facets of intelligence/ability—people can be “skewed out” in different ways that are at least partially heritable, and maintaining cognitive diversity in the population is super important.
One might worry that psychometric g is the principal component of the easy-to-measure components of intelligence, and that there are also important hard-to-measure components (or important things that aren’t exactly intelligence components/abilities, e.g. wisdom). Ideally we’d like to select for these too, but we should probably be fine as long as we aren’t accidentally selecting against them?
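A toy illustration of the “first principal component” picture (the number of tests, loadings, and noise level are all invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Positive manifold: every "test" loads on one shared factor plus test-specific
# noise. Test count, loadings, and noise scale are invented for illustration.
n_people, n_tests = 5_000, 6
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5, 0.6])
shared = rng.standard_normal(n_people)
scores = shared[:, None] * loadings + 0.6 * rng.standard_normal((n_people, n_tests))

# First principal component of the correlation matrix ~ psychometric g.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues in ascending order
first_pc = eigvecs[:, -1]
first_pc *= np.sign(first_pc.sum())          # fix sign so loadings read as positive

print("share of variance captured by the first PC:", eigvals[-1] / n_tests)
print("every test loads positively on it:", bool((first_pc > 0).all()))
```

The first PC soaks up whatever the measured tests share; anything the battery doesn’t measure (the hard-to-measure components) simply never shows up in it, which is the worry.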