Yes, as someone who has worked in both CS and neuroscience at the graduate level, I probably do know far more than you about this topic. At the risk of sounding more polemical than I am, it’s posts like yours, making excessively reductive inferences about neurons and the brain, that invariably pollute discussions of this topic and turn them into an unproductive cesspool of ML people offering their two cents on topics they don’t understand (most of the replies to the original post being cases in point).
I will grant you that we may not understand enough about the brain to be confident that such a therapy won’t irreversibly ruin test subjects’ brains, but that outcome is much less likely than either nothing happening or gains being had, provided the therapy is developed in a way that makes it feasible in the first place. Geneticists and neuroscientists have not been idle for the past century; we understand far more about the brain, neurons, and cell biology than we understand about artificial neural networks, which is why we can be confident that things like this are possible if we can overcome the engineering obstacles in the delivery and editing mechanisms. There is also no requirement to get it right on the first try; GeneSmith has sadly had to state the obvious several times in replies, namely that we have the scientific method and that it would take many experiments to get such a therapy right, if it is possible at all. Regardless, I don’t think the therapy as OP describes it is possible, for reasons HiddenPrior has already stated and others besides, but not for the inane reasons some have suggested that liken the adult brain to an immutable pre-trained neural net. It will take several breakthroughs in the field before reliable multiplex genome editing in the human central nervous system becomes a reality.
You are right, however, that without more GWASes it will likely be impossible to disentangle intelligence enhancements from changes to things like one’s psychology and predisposition to psychiatric disorders. It is even possible that these are inextricable to some extent, and that recipients of the therapy would have to accept some degree of personality change along with other introspective “updates” and shifts in disease risk. This is one aspect of the therapy the OP has been rather naïve about, both in the post and in replies to others. If a gene influences a trait as emergent and complex as intelligence, it is reasonable to suspect it affects other things too. This is demonstrable with GWASes (and has been demonstrated), but the silver lining is that allele flips that enhance intelligence tend to confer positive rather than negative changes on other systems (the key word being “tend”). As for my personal preference, I would gladly accept a different personality if it meant an IQ of 190+ or so; in any case, there is no reason to believe personality isn’t amenable to the same techniques used to alter intelligence.
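To make the pleiotropy point concrete, here is a purely illustrative toy simulation (synthetic data, not real GWAS results; all variable names and effect sizes are made up). When variants’ true effects on two traits share a common component, the per-variant effect estimates from two separate marginal-association scans come out correlated, which is how cross-trait pleiotropy shows up in GWAS summary statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 5000, 200

# Genotypes: 0/1/2 allele counts at common variants (allele freq 0.5 for simplicity).
G = rng.binomial(2, 0.5, size=(n_people, n_snps)).astype(float)

# Pleiotropy: each variant's true effects on trait A (say, a cognitive score)
# and trait B (some other system) are drawn with a shared component.
shared = rng.normal(0.0, 1.0, n_snps)
beta_a = shared + rng.normal(0.0, 0.5, n_snps)
beta_b = shared + rng.normal(0.0, 0.5, n_snps)

# Phenotypes = genetic contribution + environmental noise.
trait_a = G @ beta_a + rng.normal(0.0, 20.0, n_people)
trait_b = G @ beta_b + rng.normal(0.0, 20.0, n_people)

# Toy "GWAS": marginal per-variant effect estimates via genotype-phenotype covariance.
Gc = G - G.mean(axis=0)
est_a = Gc.T @ (trait_a - trait_a.mean()) / n_people
est_b = Gc.T @ (trait_b - trait_b.mean()) / n_people

# The two sets of estimated effects are strongly correlated: the shared causal
# component is recoverable from summary statistics alone.
r = np.corrcoef(est_a, est_b)[0, 1]
print("cross-trait effect correlation:", round(r, 2))
```

The direction of each variant’s effect on trait B relative to trait A is what determines whether an intelligence-enhancing edit helps or harms the other system; the empirical claim above is that, in real data, the correlation tends to be favorable rather than adversarial.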
The bit about personality was specifically in response to the idea that you could revert brains to childhood-like plasticity. That adds a further layer of complexity, and unlike gene editing itself, we don’t know how to begin doing it, so if you ask me, it won’t be a thing in the near future anyway. My guess is that most of your intelligence, even the genetic component, is determined by development during the phase of highest plasticity. So if you change the genes later, you’ll get either no effect or marginal ones compared to changing them in embryos, assuming it doesn’t also cause other weird side effects.
Experiments are possible, but I doubt they’d be risk-free or, honestly, even approved by an ethics committee at all as things stand. It’s a high risk for a goal that would itself probably be deemed ethically questionable. And it would be very hard, in terms of public support and funding, for the study to survive, say, a cohort “gone bad.”
Can you elaborate on this? We’d really appreciate the feedback.
I posted my reply to this as a direct reply to the OP because I think it’s too huge and elaborate to keep hidden here.