I am a Research Associate and Lab Manager in a CAR-T cell research lab (email me for specifics on my credentials), and I find the ideas here very interesting. I will email GeneSmith to get more details on their research, and I am happy to provide whatever resources I can to explore this possibility.
TL;DR: Making edits once your editing system is delivered is (relatively) easy. Determining which edits to make is (relatively) easy. (You have done a great job with your research on this, and I don’t want to come across as discouraging.) Delivering gene editing mechanisms in vivo, with any kind of scale or efficiency, is HARD.
I still think it may be possible, and I don’t want to discourage anyone from exploring this further. But I think the resources and time required to bring this anywhere close to clinical application will be more than you are expecting: probably on the order of 10-20 years and many millions of USD (5-10 million?) just to get enough data to prove the concept in mice. That may sound like a lot, but I am honestly not sure I am being appropriately pessimistic. You may be able to accelerate that timescale with significantly more funding, but only to a point.
Long Version: My biggest concern is your step 1:
“Determine if it is possible to perform a large number of edits in cell culture with reasonable editing efficiency and low rates of off-target edits.”
And translating that into step 2:
“Run trials in mice. Try out different delivery vectors. See if you can get any of them to work on an actual animal.”
I would like to hear more about your research into approaching this problem, but without more information, I am concerned you may be underestimating the difficulty of successfully delivering genetic material to any significant number of cells.
My lab does research specifically on in vitro gene editing of T-cells, mostly via lentivirus and electroporation, and I can tell you that this problem is HARD. Even in vitro, depending on the target cell type and the amount of genetic material being delivered, it is very difficult to get transduction efficiencies higher than 70%, and that is with the help of chemicals like polybrene, which significantly increases viral uptake and is not an option for in vivo editing.
When we are discussing transduction, or the transfer of genetic material into a cell, efficiency measures the percentage of cells we can successfully get a gene into.
Even when we are trying to deliver relatively small genes, we have to use a lot of tricks to get reasonable transduction efficiencies like 70% for actual human T-cells. We might use a very high concentration of virus, polybrene, RetroNectin (a protein that helps latch viruses onto a cell), and centrifugation of the cells to force them into contact with the virus/RetroNectin.
On top of that, when we are getting transduction efficiencies of 70%, that is 70% of the remaining cells. A significant number of the target cells will die due to the stress of the viral load. I don’t know for sure how many, but I have always been told that it is typically between 30% and 70% of the cells, and the more virus you use, or the higher the transduction efficiency you aim for, the more of those cells will tend to die.
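To make the compounding explicit, here is a back-of-envelope sketch (my own toy arithmetic using the illustrative ranges above, not measurements from any particular experiment) of how many of the cells you started with actually end up transduced once cell death is factored in:

```python
# Illustrative numbers only, taken from the ranges described above.
transduction_efficiency = 0.70       # fraction of SURVIVING cells transduced
for death_rate in (0.30, 0.70):      # reported range of cell death from viral stress
    overall_yield = (1 - death_rate) * transduction_efficiency
    print(f"death rate {death_rate:.0%} -> "
          f"{overall_yield:.0%} of starting cells end up transduced")
# death rate 30% -> 49% of starting cells end up transduced
# death rate 70% -> 21% of starting cells end up transduced
```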
Some things to keep in mind are:
These estimates all use lentivirus, which is considered a more efficient and less dangerous vector than AAV, mostly because it has been more extensively studied and used.
This is all in vitro; in vivo, your immune system has specialized defenses to prevent the spread of viral particles. Injections of viruses need to be localized, and you can probably only use viruses that do NOT reproduce themselves (replication-incompetent vectors); otherwise, they can cause a destructive infection wherever you put them.
Your brain cannot survive 30%+ cell death. It probably can’t survive even 5% cell death unless you treat very small areas at a time. And with currently available technology, these transductions may have to happen once for every gene edit you want to make.
Mosaicism is probably not a problem, but keep in mind that there is a selection effect here: cases where mosaicism is destructive won’t be around to be observed, so they are selected out of your observations. This, of course, would be easy to test.
Essentially, in order to make this work for in vivo gene editing of an entire organ (particularly the brain), your transduction efficiency needs to be at least 2-3 orders of magnitude higher than current technologies allow on their own, just to make up for the lack of polybrene/RetroNectin and hit your target of 50%. The difference between using polybrene and RetroNectin/LentiBOOST and going without them is the difference between 60% transduction efficiency and 1%. You may be able to find non-toxic alternatives to polybrene, but this is not an easy problem, and if you do find something like that, it is worth a ton of money and/or publication credit on its own.
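For a rough sense of what "2-3 orders of magnitude" means, here is a toy calculation (the baselines are my own illustrative assumptions, not measured values): reaching a 50% target from a 1% unaided baseline is a 50x improvement, about 1.7 orders of magnitude, and plausibly lower in vivo baselines push the gap into the 2-3 range:

```python
import math

target = 0.50
# Baselines are illustrative assumptions: ~1% for unaided lentivirus in vitro
# (per the polybrene/RetroNectin comparison above), plausibly lower in vivo.
for baseline in (0.01, 0.001, 0.0005):
    fold = target / baseline
    print(f"baseline {baseline:.2%}: {fold:,.0f}x "
          f"(~{math.log10(fold):.1f} orders of magnitude)")
# baseline 1.00%: 50x (~1.7 orders of magnitude)
# baseline 0.10%: 500x (~2.7 orders of magnitude)
# baseline 0.05%: 1,000x (~3.0 orders of magnitude)
```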
I don’t want to be discouraging here; however, it is important to understand the problem’s real scope.
At a glance, I would say the AAV (adeno-associated virus) approach is the most likely to work at the scale of transduction you are looking to accomplish. After a quick search, I found these two studies to be the most promising; both discuss delivering CRISPR/Cas9 systems via AAV. Both use hygromycin B selection (a process whereby cells that were not transduced are selected out, since hygromycin kills any cell that did not receive the resistance sequence included in the Cas9 package), and neither mentions specific transduction efficiency numbers, but I am guessing they are not on the order of 50%. At most I would hope it is as high as 5%.
All of this still does not account for the other difficulties of in vivo gene editing, such as pre-existing immunity to the vector.
Why aren’t others doing this?
I think I can help answer this question. The short answer is that they are, but they are doing it in much smaller steps. Rather than going straight for the holy grail of editing an organ as large and complex as the brain, they are starting with cell types and organs that are much easier to make edits to.
This paper is the most recent publication I can find on in vivo gene editing, and it discusses many of the strategies you have highlighted here. In this case, they use lipid nanoparticles to specifically target the epithelial cells of the lungs and edit out the genetic defect that causes cystic fibrosis. This is a much smaller and more attainable step for a lot of reasons, the biggest being that they only need to attain a very low transduction efficiency to have a highly measurable impact on the health of the mice they are testing on. It is also fairly acceptable to have a relatively high rate of cell death in epithelial cells, since they replace themselves very rapidly. In this case, their highest transduction efficiency was estimated to be as high as 2.34% in vivo, with a sample size of 8 mice.
We may be able to quickly come up with at least one meaningful gene target that could make a difference with 2.34% transduction efficiency, but be aware that delivering this at scale to a human brain will be MUCH harder than doing so with mouse epithelial cells.
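To put the brain's scale in perspective, a quick hedged calculation (assuming the commonly cited estimate of roughly 86 billion neurons in a human brain; the 2.34% is the mouse lung figure above):

```python
# Hedged scale check: ~86 billion neurons is a commonly cited estimate for
# the human brain; 2.34% is the in vivo efficiency from the mouse lung study.
neurons = 86e9
edited = 0.0234 * neurons
print(f"2.34% of {neurons:.1e} neurons ~= {edited:.1e} cells")
# 2.34% of 8.6e+10 neurons ~= 2.0e+09 cells
```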
Again, I don’t want to discourage this project. I would really like to help, actually. I want to be realistic about the challenges here, and there is a reason why the equilibrium is where it is.
Thanks for leaving such a high-quality comment. I’m sorry for taking so long to get back to you.
We fully expect bringing this to market to take tens of millions of dollars. My best guess was $20-$40 million.
“My biggest concern is your step 1:
‘Determine if it is possible to perform a large number of edits in cell culture with reasonable editing efficiency and low rates of off-target edits.’
And translating that into step 2:
‘Run trials in mice. Try out different delivery vectors. See if you can get any of them to work on an actual animal.’
I would like to hear more about your research into approaching this problem, but without more information, I am concerned you may be underestimating the difficulty of successfully delivering genetic material to any significant number of cells.”
We expect this to be difficult, but we DON’T expect to have to solve the delivery problem entirely on our own. There are significant incentives for existing companies such as Dyno Therapeutics to solve the problem of delivering genes (or other payloads) to the nuclei of brain cells. In fact, Dyno already has a product, Dyno bCap 1, which successfully delivered genes to between 5% and 20% of brain cells in non-human primates.
Obviously we will need higher efficiencies than that to perform edits for polygenic brain diseases or intelligence, but the ease of delivering payloads to brain cells has been gradually improving over the years and I expect it to continue doing so.
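As a rough illustration of why higher efficiencies matter so much (this is a naive model I am assuming for the sake of argument, not anything Dyno or anyone else has published): if a delivery round reaches a fraction p of cells, and each of n intended edits then succeeds independently with probability q in a reached cell, the fraction of cells carrying every edit falls off exponentially in n:

```python
# Naive independence model (my assumption, not a published result): a delivery
# round reaches fraction p of cells; each of n intended edits then succeeds
# with probability q in a reached cell.

def frac_fully_edited(p: float, q: float, n: int) -> float:
    """Fraction of all cells that receive the editor AND get all n edits."""
    return p * q**n

for p in (0.05, 0.20):        # Dyno bCap 1's reported delivery range
    for n in (1, 10, 100):    # number of simultaneous edits attempted
        print(f"p={p:.0%}, n={n:3d}: {frac_fully_edited(p, 0.9, n):.2e}")

# Even with an optimistic 90% per-edit success rate, requiring ALL edits in a
# cell collapses quickly; for polygenic targets, the average number of edits
# per reached cell (n * q) is probably the more relevant quantity.
```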
There are of course some issues:
I know from some conversations with a former employee of Dyno that the capsids can be customized to be serologically distinct so that any antibodies formed in response to one round of treatment will not destroy the capsids used in the second round. But I am still waiting to hear back from them regarding the cost and time required to do this sort of customization.
Custom AAVs are also quite expensive per dose, largely due to the cost of reagents and other basic supplies that no one has figured out how to make cheaper yet. So it’s a plausible delivery mechanism, but far from ideal.
Still, the fact that there is an existing product on the market which can get custom DNA payloads into the nuclei of brain cells gives me hope that someone else will have made major headway on the delivery problem by the time we are ready for trials in cows or non-human primates.
In the meantime, we simply need a way to get editors into HEK cells and brain cells efficiently in cell culture to test multiplex editing approaches. I’m sure this will pose its own set of challenges, but given that dozens of labs have done this, I don’t expect it to be infeasible.
Send me an email if you have time to chat about this. If you’re willing I’d like to pick your brain more about other aspects of the project.
Really interesting, thanks for commenting.
“My lab does research specifically on in vitro gene editing of T-cells, mostly via lentivirus and electroporation, and I can tell you that this problem is HARD.”
Are you doing traditional gene therapy or CRISPR-based editing?
If the former, I’d guess you’re using lentivirus because you want genome integration?
If the latter, why not use Lipofectamine?
How do you use electroporation?
“Even in vitro, depending on the target cell type and the amount of genetic material being delivered, it is very difficult to get transduction efficiencies higher than 70%, and that is with the help of chemicals like polybrene, which significantly increases viral uptake and is not an option for in vivo editing.”
Does this refer to the proportion of the remaining cells which had successful edits / integration of the donor gene? Or the number that were transfected at all (in which case, how is that measured)?
“Essentially, in order to make this work for in vivo gene editing of an entire organ (particularly the brain), your transduction efficiency needs to be at least 2-3 orders of magnitude higher than current technologies allow on their own, just to make up for the lack of polybrene/RetroNectin and hit your target of 50%.”
This study achieved up to 59% base editing efficiency in mouse cortical tissue, while this one achieved up to 42% prime editing efficiency (both using a dual AAV vector). These results contributed to our initial optimism that the delivery problem wasn’t completely out of reach. I’m curious what you think of these results; maybe there’s some weird caveat I’m not understanding.
“The short answer is that they are, but they are doing it in much smaller steps. Rather than going straight for the holy grail of editing an organ as large and complex as the brain, they are starting with cell types and organs that are much easier to make edits to.”
This is my belief as well, though the dearth of results on multiplex editing in the literature is strange. E.g. why has no one tried making 100 simultaneous edits at different target sequences? Maybe it’s obvious to the experts that the efficiency would be too low to bother with?
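For what it's worth, a toy model (my assumption: each edit succeeds independently with a common per-target efficiency q in cells that received the editor) suggests even modest per-target efficiencies would still yield many edits per cell on average, which matters if you need many edits per cell rather than all of them:

```python
import math

# Toy model (my assumption): each of n = 100 edits succeeds independently with
# per-target efficiency q in a cell that received the editor, so the number of
# successful edits per cell is Binomial(n, q).
n = 100
for q in (0.05, 0.30, 0.60):
    mean = n * q
    sd = math.sqrt(n * q * (1 - q))
    print(f"q={q:.0%}: ~{mean:.0f} +/- {sd:.1f} edits per transduced cell")
# q=5%:  ~5 +/- 2.2 edits per transduced cell
# q=30%: ~30 +/- 4.6 edits per transduced cell
# q=60%: ~60 +/- 4.9 edits per transduced cell
```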