Thanks for the response! Very helpful and enlightening.
The reason for this is actually pretty simple: genes with linear effects have an easier time spreading throughout a population.
This is interesting—I have never come across this. Can you expand on the intuition of this model a little more? Is the intuition something like: in the fitness landscape, genes with linear effects are like gentle slopes that are easy to traverse, versus extremely wiggly 'directions'?
Also, the way I am thinking about linearity is maybe slightly different from the normal ANOVA/factor-analysis way. Suppose we have some protein where more is better, and 100 different genes that can each either upregulate or downregulate it. However, at some large amount, say 80x the usual level, the benefit saturates. A normal person is very unlikely to carry 80/100 positive variants, so if we go in and edit all 100 to be positive, we get a benefit far below what an additive model would predict, since it maxes out at 80x. I guess that to detect this nonlinearity in a normal population you would basically need an 80+th-order interaction of all of them lining up in just the right way, which is exceedingly unlikely. Is this your point about sample size?
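To make the saturation scenario concrete, here is a toy simulation (all numbers made up for illustration: 100 variants at 50% frequency, a cap at 80x). It shows that essentially nobody in a natural population sits above the cap, so the observed genotype-phenotype relationship looks perfectly linear, while editing all 100 variants lands well below the additive extrapolation:

```python
import random

N_GENES = 100  # hypothetical up/down-regulating variants
CAP = 80       # assumed saturation point (80x baseline)

def benefit(n_positive):
    # Saturating response: linear up to the cap, flat beyond it.
    return min(n_positive, CAP)

random.seed(0)
# Count of positive variants per person, 100k simulated people.
population = [sum(random.random() < 0.5 for _ in range(N_GENES))
              for _ in range(100_000)]

# Almost nobody carries >80 positive variants (mean 50, sd 5),
# so the nonlinearity is invisible in the natural population.
above_cap = sum(1 for n in population if n > CAP)
print(f"individuals above the cap: {above_cap} / {len(population)}")

# Additive extrapolation vs. actual outcome of editing all 100 variants.
slope = 1.0                  # per-variant effect fit in the linear regime
predicted = slope * N_GENES  # additive model predicts 100x
actual = benefit(N_GENES)    # true saturating response gives 80x
print(f"additive prediction: {predicted}, actual: {actual}")
```

This is just a sketch of the intuition, not a claim about any real protein or effect sizes.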
I’ll talk about this in more detail within the post, but yes we have examples of monogenic diseases and cancers being cured via gene therapy.
This is very cool. Are the cancer cures also monogenic? Has anybody done any large-scale polygenic editing in mice or other animals before humans? That seems like the obvious place to explicitly test causality and linearity directly. Are we bottlenecked on GWAS equivalents for other animals?
Also, the way I am thinking about linearity is maybe slightly different from the normal ANOVA/factor-analysis way. Suppose we have some protein where more is better, and 100 different genes that can each either upregulate or downregulate it. However, at some large amount, say 80x the usual level, the benefit saturates. A normal person is very unlikely to carry 80/100 positive variants, so if we go in and edit all 100 to be positive, we get a benefit far below what an additive model would predict, since it maxes out at 80x. I guess that to detect this nonlinearity in a normal population you would basically need an 80+th-order interaction of all of them lining up in just the right way, which is exceedingly unlikely. Is this your point about sample size?
The way I think about the sample sizes needed to identify non-linear effects is more like this: if you're testing the hypothesis that A_i has an effect on trait T, but only in the presence of another variant B_k, you need a large sample of patients carrying both A_i and B_k. If both variants are rare, that can multiply the sample size needed to reach genome-wide significance by a factor of 10 or even 100.
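The multiplier above can be sketched with back-of-the-envelope arithmetic (carrier frequencies here are invented for illustration, and the two variants are assumed independent):

```python
def carriers(n, freq):
    """Expected number of carriers of one variant in a sample of size n."""
    return n * freq

def joint_carriers(n, freq_a, freq_b):
    """Expected carriers of BOTH variants, assuming independence."""
    return n * freq_a * freq_b

N = 1_000_000          # hypothetical biobank-scale cohort
p_a, p_b = 0.05, 0.05  # assumed carrier frequencies for A_i and B_k

single = carriers(N, p_a)           # carriers of A_i alone: 50,000
both = joint_carriers(N, p_a, p_b)  # carriers of A_i AND B_k: 2,500

# To observe as many joint carriers as a main-effect analysis gets
# single carriers, the cohort must grow by roughly 1 / p_b.
multiplier = single / both
print(f"sample-size multiplier: {multiplier:.0f}x")
```

At 5% carrier frequency for the second variant this gives a 20x penalty; at 1% it would be 100x, which matches the 10-100x range mentioned above.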
This is very cool. Are the cancer cures also monogenic?
The ones I know of are.
If this tech works, it's hard to overstate just how big the impact would be, especially if you could target edits to a specific cell type (which has been done in a limited capacity already). With high enough editing efficiency, you could probably bring a cancer survivor's recurrence risk back to a pre-cancerous state by identifying the specific mutations that made the cells of that particular organ pre-cancerous and reverting them to their original state. You could even make their cancer risk lower than it was before by adjusting their polygenic risk score for cancer.
I really don’t think I can oversell how transformative this tech would be if it actually worked well. You could probably dramatically extend the human healthspan, make people smarter, and do all kinds of other things.
There are of course ways it could be used that would be concerning. For example, a really determined government might be able to make a genetic predictor for obedience or something and modify people’s polygenic scores for obedience. On the other hand, you could probably use that same technology to reduce the risk of violent criminals reoffending, which could be good.
I tend not to think too much about these kinds of concerns because the situation with AI seems so dire. But if by some miracle we pass a global moratorium on hardware improvement to buy ourselves more time to figure out solutions to alignment and misuse concerns, this tech could play a hugely pivotal role in that. Not to mention all the more down-to-earth stuff it could do for diseases, mental disorders, suffering, and general quality of life.
There are so many cool ideas we could test if we were braver. Stuff like the elephant variant of p53, a nonlethal form that cells can safely carry in extra copies alongside the normal variant, reducing cancer likelihood without the negative side effect of increased unwanted cell death.
Or the way seal glial cells take on some of the neurons' metabolic load, using their own mitochondria to regenerate ATP and shipping it over to the neurons.