An alternative to editing many genes individually is to synthesise the whole genome from scratch, which is plausibly cheaper and more accurate.
While it’s plausible that there will be a future where that’s cheaper, it currently costs 9 figures to synthesize a human genome from scratch. Whether there will ever be a time when that’s cheaper than more targeted modifications is a very open question.
In the case of reducing mutational load to near zero, you might be doing targeted changes to huge numbers of genes. There is presumably some point at which it’s easier to create a genome from scratch.
I agree it’s an open question though!
I don’t see why that should be the case. I see no principled reason why you shouldn’t be able to scale up targeted editing to 20,000 changes (roughly the number of human genes).
If you want to zero out mutational load and create a modal genome, you’re approximately 2 orders of magnitude off in the number of edits you need to do. (The number of evolutionarily-conserved protein-coding regions has little to do with the number of total variants across the billions of basepairs in a specific human’s genome.) Considering that it is unlikely we will ever get base editors whose total error rates approach 1 in millions, one will have to be a little more clever about how one goes about it. (Maybe some sort of multi-generational mass mutagenesis-editing / screening loop?)
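To make the orders-of-magnitude argument concrete, here is a back-of-the-envelope sketch in Python. The specific figures (variant count per genome, per-edit error rate) are illustrative assumptions chosen for round numbers, not established values:

```python
# Back-of-the-envelope numbers (assumptions, not measured figures).
n_genes = 20_000        # approximate number of human protein-coding genes
n_variants = 4_000_000  # assumed order-of-magnitude count of variants in
                        # one genome relative to a modal/reference genome

# Ratio of edits needed for a modal genome vs. one-edit-per-gene:
ratio = n_variants / n_genes  # ~200x, i.e. roughly 2 orders of magnitude

# If each edit independently goes wrong with probability p, the expected
# number of erroneous edits across the whole job is n * p.
p = 1e-4  # assumed per-edit error rate (optimistic for base editing)
expected_errors = n_variants * p  # hundreds of errors even at this rate

print(f"edit ratio: ~{ratio:.0f}x, expected errors: ~{expected_errors:.0f}")
```

Even with a per-edit error rate as low as the assumed 1 in 10,000, editing millions of sites leaves hundreds of expected new errors, which is why an error-free modal genome would demand error rates approaching 1 in millions, or some iterative screening scheme.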
Anyway, I would point out that you can do genome synthesis on a much cheaper scale than whole-genome: whole-chromosome is an obvious intermediate point which would be convenient to swap out. And for polygenic traits, optimizing a single chromosome might push the phenotype out as far as you want to go in a single generation anyway.