I am a relatively recently reformed geneticist/molecular biologist and previously used CRISPR/Cas9 at the bench in an experimental context. I no longer work in the lab and admit I am not well-read on the latest literature.
I think this approach is interesting, and theoretically executable, but practically infeasible at the current maturity level of the relevant technologies. I’m not sure such a mission would be a good use of expertise and money at this stage. I share the views of a lot of the top level commenters here about the limited feasibility of the approach on both scientific and societal grounds.
I will not repeat the concerns raised by others, but I have two related comments which, while they have been touched on by yourself and others already, hopefully add to the discussion:
1. I am skeptical of claims that editing techniques have zero/minimal/negligible off-target effects generally, and think you need to have an exceptionally, exceptionally strong evidence base to support this before getting near humans, considering the volume of edits you need to make.
You acknowledge this but I feel you downplay the risk of cancer—an accidental point mutation in a tumour suppressor gene or regulatory region in a single founder cell could cause a tumour. Gene therapies (e.g. for X-linked SCID) have historically had issues with this. While the mechanics here are different, there are obvious similarities which would make me (and any regulatory authority) extremely cautious.
2. You propose that variants in non-coding regions would be preferable to target, since off-target mutations would have less effect in non-coding regions.
Firstly, if you’re this worried about off-targets/incorrect edits then this is quite a major concern for the feasibility of the approach (see #1 above). If you’re not actually worried about off-targets/incorrect edits then you should surely be confident to target coding regions?
Secondly, I don’t follow the logic that choosing non-coding targets would be safer as this would lead to only non-coding off-target mutations. As far as I am aware off-target mutations can take place anywhere in the genome, and it is not the case that having a target in a non-coding region would mean that off-targets were also in non-coding regions. [Unless you are using the term “off-target” to refer to any incorrect edit of the target site, and wider unwanted edits—in my community this term referred specifically to ectopic edits elsewhere in the genome away from the target site.]
I wondered if this assertion was based on evidence that the type of editing you propose to use has a greater risk of off-target mutations closer to the target site? But even if that is the case, non-coding and coding regions are adjacent to each other in the genome, so a nearby mutation could just as well affect a coding region.
You acknowledge this but I feel you downplay the risk of cancer—an accidental point mutation in a tumour suppressor gene or regulatory region in a single founder cell could cause a tumour.
For each target the likely off-targets can be predicted, allowing one to avoid particularly risky edits. There may still be issues with sequence-independent off-targets, though I believe these are a much larger problem with base editors than with prime editors (which have lower off-target rates in general). Agree that this might still end up being an issue.
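To make "the likely off-targets can be predicted" a bit more concrete, here is a toy sketch of the underlying idea: enumerate genomic sites that nearly match the protospacer and flag them for review, so that targets whose predicted off-targets land somewhere risky can be dropped. This is purely illustrative (made-up sequences, no PAM or bulge handling, a naive scan rather than genome-scale alignment) and is not how any particular published tool works.

```python
# Toy sketch of sequence-based off-target prediction: scan a genome string
# for sites within a small Hamming distance of the protospacer. Real
# pipelines work genome-wide with alignment, PAM constraints and bulges;
# this only illustrates enumerating likely off-target sites so that
# particularly risky targets can be avoided. All sequences are made up.

def hamming(a: str, b: str) -> int:
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def candidate_off_targets(genome: str, protospacer: str, max_mismatches: int = 3):
    """Yield (position, site, mismatches) for imperfect near-matches."""
    k = len(protospacer)
    for i in range(len(genome) - k + 1):
        site = genome[i:i + k]
        mm = hamming(site, protospacer)
        if 0 < mm <= max_mismatches:  # 0 mismatches would be the on-target site
            yield i, site, mm

# Hypothetical short "genome" containing one perfect match and one near-match.
genome = "ACGTACGTTTGACCGGATTACGTACGTTTTTTTTGACCGGAATACGTACGTTTTTTT"
protospacer = "TTGACCGGATTACGTACG"

for pos, site, mm in candidate_off_targets(genome, protospacer):
    print(f"possible off-target at {pos}: {site} ({mm} mismatches)")
```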
Unless you are using the term “off-target” to refer to any incorrect edit of the target site, and wider unwanted edits—in my community this term referred specifically to ectopic edits elsewhere in the genome away from the target site.
This is exactly it—the term “off-target” was used imprecisely in the post to keep things simple. The thing we’re most worried about here is misedits (mostly indels) at noncoding target sites. We know a target site does something (if the variant there is in fact causal), so we might worry that an indel will cause a big issue (e.g. disabling a promoter binding site). Then again, the causal variant we’re targeting has a very small effect, so maybe the sequence isn’t very sensitive and an indel won’t be a big deal? But it also seems perfectly possible that the sequence could be sensitive to most mutations while permitting a specific variant with a small effect. The effect of an indel will at least probably be less bad than in a coding sequence, where it has a high chance of causing a frameshift mutation and knocking out the coded-for protein.
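As a purely illustrative aside on the frameshift point (the sequence below is made up, not taken from the post): a 1 bp deletion in a coding sequence shifts every downstream codon and often introduces a premature stop, whereas the same deletion in noncoding sequence has no reading frame to break.

```python
# Tiny demonstration of why an indel in a coding sequence is usually worse
# than in noncoding sequence: a deletion whose length is not a multiple of 3
# shifts the reading frame and garbles every downstream codon, often hitting
# a premature stop. The sequence is hypothetical, for illustration only.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def codons(seq: str):
    """Split a sequence into in-frame codons, dropping any trailing bases."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

coding = "ATGGCTGAAACCGGTTTAGCA"   # hypothetical short ORF, no internal stop
shifted = coding[:4] + coding[5:]  # delete one base -> frameshift downstream

print("original codons :", codons(coding))
print("after 1-bp del  :", codons(shifted))
print("premature stop? :", any(c in STOP_CODONS for c in codons(shifted)))
```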
The important figure of merit for editors with regard to this issue is the ratio of correct edits to misedits at the target site. In the case of prime editors, IIUC, all misedits at the target site are reported as “indels” in the literature (base editors have other possible outcomes such as bystander edits or conversion to the wrong base). Some optimized prime editors have edit:indel ratios of >100:1 (best I’ve seen so far is 500:1, though IIUC this was just at two target sites, and the rates seem to vary a lot by target site). Is this good enough? I don’t know, though I suspect not for the purposes of making a thousand edits. It depends on how large the negative effects of indels are at noncoding target sites: is there a significant risk the neuron gets borked as a result? It might be possible to predict this on a site-by-site basis with a better understanding of the functional genomics of the sequences housing the causal variants which affect polygenic traits (which would also be useful for finding the causal variants in the first place without needing as much data).
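To put rough numbers on that suspicion, here is a back-of-the-envelope sketch, assuming edits at different target sites are independent and that the quoted edit:indel ratios apply uniformly across sites (both simplifications, since the rates vary a lot by site):

```python
# Back-of-the-envelope: expected mis-edits (indels) at target sites when
# attempting many edits. Assumes each target site is edited independently
# and that, conditional on a site being modified, the outcome is a correct
# edit or an indel in the stated ratio. Purely illustrative numbers, not
# measured rates for any specific editor.

def expected_misedits(n_sites: int, edit_to_indel_ratio: float):
    """Return (expected indel count, probability that every site comes out clean)."""
    p_indel = 1.0 / (edit_to_indel_ratio + 1.0)  # P(indel | site modified)
    expected = n_sites * p_indel
    p_none = (1.0 - p_indel) ** n_sites          # binomial: zero indels anywhere
    return expected, p_none

for ratio in (100, 500):
    exp_indels, p_clean = expected_misedits(1000, ratio)
    print(f"{ratio}:1 -> ~{exp_indels:.1f} expected indels, "
          f"P(no indels) = {p_clean:.1e}")
```

Under those assumptions a 100:1 ratio leaves roughly ten expected indels per cell across a thousand target sites, and even 500:1 leaves around two, so the question really does reduce to how harmful a handful of indels at noncoding target sites would be.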