I’m a software developer by training with an interest in genetics. I am currently doing independent research on gene therapy with an emphasis on intelligence enhancement.
GeneSmith
I think the response to 9/11 was an outlier mostly caused by the “photogenic” nature of the disaster. COVID killed over a million Americans yet we basically forgot about it once it was gone. We haven’t seen much serious investment in measures to prevent a new pandemic.
Seems like the only thing that could stop the train at this point is a few tens or hundreds of millions of deaths from out of control AI. Doesn’t seem like anyone in government wants to cooperate to reduce the risk of everyone dying. Both the US and China have individually decided to roll the dice on creating machines they don’t understand and may not be able to control.
I really should have done a better job explaining this in the original comment; it’s not clear we could actually make someone with an IQ of 1700, even if we were to stack additive genetic variants one generation after the next. For one thing you probably need to change other traits alongside the IQ variants to make a viable organism (larger birth canals? Stronger necks? Greater mental stability?). And for another it may be that if you just keep pushing in the same “direction” within some higher dimensional vector space, you’ll eventually end up overshooting some optimum. You may need to re-measure intelligence every generation and then do editing based on whatever genetic variants are meaningfully associated with higher cognitive performance in those enhanced people to continue to get large generation-to-generation gains.
I think these kinds of concerns are basically irrelevant unless there is a global AI disaster that kills hundreds of millions of people and gets the tech banned for a century or more. At best you’re probably going to get one generation of enhanced humans before we make the machine god.
For a given level of IQ controlling ever higher ones, you would at a minimum require the creature to decide morals, i.e. is Moral Realism true, and if so, what is it?
I think it’s neither realistic nor necessary to solve these kinds of abstract philosophical questions to make this tech work. I think we can get extremely far by doing nothing more than picking low hanging fruit (increasing intelligence, decreasing disease, increasing conscientiousness and mental energy, etc)
I plan to leave those harder questions to the next generation. It’s enough to just go after the really easy wins.
I additionally believe that they would not be able to persuade lower-IQ creatures of such values, and would therefore be forced into deception, etc.
Manipulation of others by enhanced humans is somewhat of a concern, but I don’t think it’s for this reason. I think the biggest concern is just that smarter people will be better at achieving their goals, and manipulating other people into carrying out one’s will is a common and time-honored tactic to make that happen.
In theory we could at least reduce this tendency a little bit by maybe tamping down the upper end of sociopathic tendencies with editing, but the issue is personality traits have a unique genetic structure with lots of non-linear interactions. That means you need larger sample sizes to figure out what genes need editing.
I can’t really speak to your specific experience too well other than to simply say I’m sorry you had to go through that. What we actually see is that, in general, the prevalence of mental health conditions declines with increasing IQ. The one exception to this is Asperger’s.
I do think it’s going to be very important to address mental health issues as well. Many mental health conditions are reasonably editable; we could reduce the prevalence of some by 50%+ with editing.
Oops, thanks for the correction
It’s not a dumb question. It’s a fair concern.
I think the main issue with chickens is not that faster growth is inevitably correlated with health problems, but that chicken breeders are happy to trade off a good amount of health for growth so long as the chicken doesn’t literally die of health problems.
You can make different trade-offs! We could have fast-growing chickens with much better health if breeders prioritized chicken health more highly.
How would a limbic system handle that much processing power: I’m not sure it would be able to. How deep of a sense of existential despair and terror might that mind feel?
I don’t think you’ll need to worry about this stuff until you get really far out of distribution. If you’re staying anywhere near the normal human distribution you should be able to do pretty simple stuff like select against the risk of mental disorders while increasing IQ.
Well, we can just see empirically that linear models predict outliers pretty well for existing traits. For example, here’s a graph of the polygenic score for Shawn Bradley, a 7′6″ former NBA player. He does indeed show up as a very extreme data point on the graph:
I think your general point stands: if we pushed far enough into the tails of these predictors, the actual phenotypes would almost certainly diverge from the predicted phenotypes. But the simple linear models seem to hold quite well within the existing human distribution.
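To make the “simple linear model” point concrete, here’s a toy sketch of how a polygenic score works: it’s just a weighted sum of allele dosages, so an extreme outlier is simply someone whose dosages happen to line up with the effect signs unusually often. The variant names and effect sizes below are invented for illustration; real predictors sum over thousands of variants with much smaller effects.

```python
# Hypothetical per-variant effect sizes (in trait SDs per effect allele).
# Real GWAS predictors use thousands of variants with tiny effects.
effect_sizes = {"rsA": 0.02, "rsB": -0.01, "rsC": 0.03, "rsD": 0.015}

def polygenic_score(genotype):
    """Additive linear model: allele dosage (0, 1, or 2) times effect size, summed."""
    return sum(effect_sizes[v] * dose for v, dose in genotype.items())

# A typical genotype vs. one enriched for trait-increasing alleles.
average = {"rsA": 1, "rsB": 1, "rsC": 1, "rsD": 1}
outlier = {"rsA": 2, "rsB": 0, "rsC": 2, "rsD": 2}

print(polygenic_score(average))  # 0.055
print(polygenic_score(outlier))  # 0.13
```

The outlier scores higher on every term, which is all “being in the tail” of a linear predictor means.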
I should probably clarify; it’s not clear that we could create someone with an IQ of 1700 in a meaningful sense. There is that much additive variance, sure. But as you rightly point out, we’re probably going to run into pretty serious constraints before that (size of the birth canal being an obvious one, metabolic constraints being another)
I suspect that to support someone even in the 300 range would require some changes to other aspects of human nature.
The main purpose of making this post was simply to point out that there’s a gigantic amount of potential within the existing human gene pool to modify traits in desirable ways. Enough to go far, far beyond the smartest people that have ever lived. And that if humans decide they want it, this is in fact a viable path towards an incredibly bright, almost limitless future that doesn’t require building a (potentially) uncontrollable computer god.
Any reason the $4M isn’t getting funded?
No one with the money has offered to fund it yet. I’m not even sure they’re aware this is happening.
I’ve got a post coming out about this in the next few weeks, so I’m hoping that leads to some kind of focus on this area.
OpenPhil used to fund stuff like this, but they’ve become extremely image-conscious in the aftermath of the FTX blowup. So far as I can tell, they now stay safely inside the norms acceptable to the Democratic donor class.
This is why I wrote a blog post about enhancing adult intelligence at the end of 2023; I thought it was likely that we wouldn’t have enough time.
I’m just going to do the best I can to work on both these things. Being able to do a large number of edits at the same time is one of the key technologies for both germline and adult enhancement, which is what my company has been working on. And though it’s slow, we have made pretty significant progress in the last year including finding several previously unknown ways to get higher editing efficiency.
I still think the most likely way alignment gets solved is just smart people working on it NOW, but it would sure be unfortunate if we DO get a pause and no one has any game plan for what to do with that time.
No, I mean 1700. There are literally that many variants. On the order of 20,000 or so.
You’re correct of course that if we don’t see some kind of pause, gene editing is probably not going to help.
But you don’t need a multi-generational one for it to have a big effect. You could create people smarter than any that have ever lived in a single generation.
(I believe that WBE can get all the way to a positive singularity—a group of WBE could self-optimize, sharing the latest hardware as it became available in a coordinated fashion so that no one person or group would get a decisive advantage. This would get easier for them to coordinate as the WBE got more capable and rational.)
Maybe, but my impression is whole brain emulation is much further out, technologically speaking, than gene editing. We already have basically all the tools necessary for genetic enhancement except a reliable way to convert edited cells into sperm, eggs, or embryos. Last I checked, we JUST mapped the neuronal structure of the fruit fly for the first time last year, and it’s still not enough to recreate the fly’s functionality because we’re still missing the connectome.
Maybe some alternative path like high fidelity fMRI will yield something. But my impression is that stuff is pretty far out.
I also worry about the potential for FOOM with uploads. Genetically engineered people could be very, very smart, but they can’t make a million copies of themselves in a few hours. There are natural speed limits to biology that make it less explosive than digital intelligence.
Why should you believe an IQ 200 can control 400 any more than IQ 80 could control 200? (And if you believe gene editing can get IQ 600, then you must believe the AI can self optimize well above that. However I think there is almost no chance you will get that high because diminishing returns, correlated changes etc)
The hope is of course that at some point of intelligence we will discover some fundamental principles that give us confidence our current alignment techniques will extrapolate to much higher levels of intelligence.
Additionally, there is unknown x-risk and s-risk from a multi-generational pause with our current tech. Once a place goes bad like North Korea, modern technology means there is likely no coming back. If such centralization is a one-way street, then over time an ever larger percentage of the world will fall under such systems, perhaps 100%.
This is an interesting take that I hadn’t heard before, but I don’t really see any reason to think our current tech gives a big advantage to autocracy. The world has been getting more democratic and prosperous over time. There are certainly local occasional reversals, but I don’t see any compelling reason to think we’re headed towards a permanent global dictatorship with current tech.
I agree the risk of a nuclear war is still concerning (as is the risk of an engineered pandemic), but these risks seem dwarfed by those presented by AI. Even if we create aligned AGI, the default outcome IS a global dictatorship, as the economic incentives are entirely pointed towards aligning it with its creators and controllers as opposed to the rest of humanity.
It’s probably worth noting that there’s enough additive genetic variance in the human gene pool RIGHT NOW to create a person with a predicted IQ of around 1700.
You’re not going to be able to do that in one shot due to safety concerns, but based on how much we’ve been able to influence traits in animals through simple selective breeding, we ought to be able to get pretty damn far if we are willing to do this over a few generations. Chickens are literally 40 standard deviations heavier than their wild-type ancestors, and other types of animals are tens of standard deviations away from THEIR wild-type ancestors in other ways. A human 40 standard deviations away from natural human IQ would have an IQ of 600.
Even with the data we have TODAY, we could almost certainly make someone in the high 100s to low 200s just with gene editing and a subset of the not-all-that-great IQ test data we’ve already collected.
If one of the big government biobanks just allows the data that has ALREADY BEEN COLLECTED to be used to create an IQ predictor, we could nearly double the expected gain (in fact, we would more than double it for higher numbers of edits)
All we need is time. In my view it’s completely insane that we’re rolling the dice on continued human existence like this when we will literally have human supergeniuses walking around in a few decades.
The biggest bottleneck for this field is a reliable technique to convert a stem cell into an embryo. There’s a very promising project that might yield a workable technique to do that, and the guy who wants to run it can’t because he doesn’t have $4 million to do primate testing (despite early signs showing it will pretty plausibly work).
If we have time, human genetic engineering literally is the solution to the alignment problem. We are maybe 5–8 years out from being able to make happy, healthy supergeniuses with above-average altruism, and instead of waiting a few more decades for those kids to grow up, we’ve collectively decided to roll the dice on making a machine god.
We have this incredible situation right now where the US government is spending tens of billions of dollars on infrastructure designed to make all of its citizens obsolete, powerless and possibly dead, yet won’t even spend a few million on research to make humans better.
It’s a good question, and a hard one to answer because we don’t have great data on which genes matter in which tissues.
But we do have decent proxies; for example we have OK RNA sequencing data from different bodily tissues, and we could probably use that to figure out in which tissues proteins associated with genetic variants known to affect intelligence are most heavily expressed.
I haven’t thought this to be a huge priority yet because some early data I saw indicated that most of the genetic variants acted primarily through the brain. And given that we probably couldn’t yet raise IQ more than a standard deviation even if we had both great brain delivery and better gene editors, it just didn’t seem like a high priority yet.
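The proxy approach mentioned above (using RNA sequencing data to infer which tissues matter) can be sketched very simply: rank tissues by how heavily the genes near intelligence-associated variants are expressed in each one. All gene names, tissue names, and expression values below are invented for illustration.

```python
# Hypothetical mean RNA expression (e.g., in TPM) of trait-associated genes,
# broken out by tissue. Values are made up for illustration.
expression = {
    "brain_cortex": {"GENE1": 120.0, "GENE2": 85.0},
    "liver":        {"GENE1": 4.0,   "GENE2": 2.0},
    "muscle":       {"GENE1": 10.0,  "GENE2": 7.0},
}

def mean_expression(tissue_profile):
    """Average expression of the associated genes within one tissue."""
    return sum(tissue_profile.values()) / len(tissue_profile)

# Rank tissues by how heavily the associated genes are expressed there.
ranked = sorted(expression, key=lambda t: mean_expression(expression[t]),
                reverse=True)
print(ranked)  # ['brain_cortex', 'liver', 'muscle'] ordering depends on the data
```

If the brain dominates the ranking, that supports prioritizing brain delivery over systemic delivery.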
I’ve got a post coming out about this in the next month. It’s not as either/or as you might think.
I don’t really see any reason why you couldn’t just do a setwise comparison and check which of the extraversion-increasing variants (or combinations of variants, if epistatic effects dominate) increase the trait without increasing conformity to social desirability.
In fact if you just select for disagreeableness as well that might just fix the problem.
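The setwise comparison idea is simple enough to sketch: given per-variant effect estimates on both traits, keep only the variants that push the target trait up without also pushing the unwanted trait up. Every variant name and effect size below is invented purely for illustration.

```python
# Hypothetical per-variant effects on two traits (trait SDs per allele).
# All names and numbers are invented for illustration.
variants = {
    "v1": {"extraversion": 0.03, "conformity": 0.02},
    "v2": {"extraversion": 0.02, "conformity": -0.01},
    "v3": {"extraversion": 0.04, "conformity": 0.00},
    "v4": {"extraversion": 0.01, "conformity": 0.03},
}

# Keep only variants that raise extraversion without raising conformity.
selected = [v for v, fx in variants.items()
            if fx["extraversion"] > 0 and fx["conformity"] <= 0]

print(selected)  # ['v2', 'v3']
```

The same filter, with the sign on the second condition flipped, is what selecting for disagreeableness alongside extraversion would amount to.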
The key distinction is that IQ demonstrates a robust common pathway structure—different cognitive tests correlate with each other because they’re all tapping into a genuine underlying cognitive ability. In contrast, personality measures often fail common pathway tests, suggesting that the correlations between different personality indicators might arise from multiple distinct sources rather than a single underlying trait. This makes genetic selection for personality traits fundamentally different from selecting for IQ—not just in terms of optimal selection strength, but in terms of whether we can meaningfully select for the intended trait at all.
There is such a thing as a “general factor of personality”. I’m not sure how you can say that the thing IQ is measuring is real while the general factor of personality isn’t.
Sure big 5 aren’t the end-all be-all of personality but they’re decent and there’s no reason you couldn’t invent a more robust measure for the purpose of selection.
I think we would probably want to select much less hard on personality than on IQ. For virtually any one of the big five personality traits there is obviously a downside to becoming too extreme. For IQ that’s not obviously the case.
Paper 1 is pretty interesting and is related to one of the methods of brain delivery I’ve looked at before.
I’m not sure we really want to have a lot of T cells floating around inside the central nervous system, but a friend of mine who spent a few days looking into this earlier this year thought we might be able to repurpose the same tech with synthetic circuits to use microglia cells in the brain to do delivery.
Microglia (and to a lesser extent oligodendrocytes and astrocytes) naturally migrate around the brain. If you could get them to deliver RNA, DNA or RNP payloads to cells when they encounter a tissue-specific cell surface receptor, that might actually solve the brain delivery issue better than any other method.
This would take a lot of work to develop, but if you managed to get it working you could potentially continuously dose a patient’s brain with an arbitrary protein for an indefinite time period. That could be pretty damn valuable.
Alternatively you might be able to use a conditionally activated system like Tet-On to temporarily turn on expression of gene editors or gene editor RNA for some time period to do editing.
Yes. Once this tech works you can use it for basically anything so long as you can make enough edits and the genes involved are known.
So we really don’t need to make any special considerations for memory.
We will be hiring fairly soon. Reach out to me at genesmithlesswrong@gmail.com
I’m almost certainly somewhat of an outlier, but I am very excited about having 3+ children. My ideal number is 5 (or maybe more if I become reasonably wealthy). My girlfriend is also on board.
I just can’t picture anything more joyous in a normal life (i.e. excluding upload enabled perma-jhana) than finding someone I deeply love and combining ourselves to make new people. It’s a miracle that’s even possible! If this wasn’t a normal part of everyday life people would laugh at you for proposing such an absurd thing could ever be possible.