Zvi recently asked on Twitter:
If someone was founding a new AI notkilleveryoneism research organization, what is the best research agenda they should look into pursuing right now?
To which Eliezer replied:
Human intelligence augmentation.
And then elaborated:
No time for GM kids to grow up, so:
collect a database of genius genomes and try to interpret for variations that could have effect on adult state rather than child development
try to disable a human brain’s built-in disablers like rationalization circuitry, though unfortunately a lot of this seems liable to have mixed functional and dysfunctional use, but maybe you can snip an upstream trigger circuit
upload and mod the upload
neuralink shit but aim for 64-node clustered humans
This post contains the most in-depth analysis of human intelligence augmentation (HIA) I have seen recently, and provides the following taxonomy for applications of neurotechnology to alignment:
BCIs to extract human knowledge
neurotech to enhance humans
understanding human value formation
cyborgism
whole brain emulation
BCIs creating a reward signal.
It also includes the opinions of attendees (stated to be 16 technical researchers and domain specialists) who provide the following analysis of these options:
Outside of cyborgism, and aside from the post above, I have seen very little recent discussion of HIA. This could be because I am simply looking in the wrong places, or because the topic is not often discussed as a legitimate AI safety agenda. The following is a list of questions I have about the topic:
Does anyone have a comprehensive list of organizations working on HIA or related technologies?
Perhaps producing something like this map for HIA would be valuable.
Does independent HIA research exist outside of cyborgism?
My intuition is that HIA research probably has a much higher barrier to entry than, say, mechanistic interpretability (both in cost and in background education). Does this make it unfit for independent research?
(If you think HIA is a good agenda:) What are some concrete steps that we (members of the EA and LW communities) can take to push HIA forward for the sake of AI safety?
EDIT: “We have to Upgrade” is another recent piece on HIA with some useful discussion in the comments, where several people give their individual thoughts; see Carl Shulman’s response and Nathan Helm-Burger’s response.
I think somatic gene therapy, while technically possible in principle, is extremely unpromising for intelligence augmentation. Creating a super-genius is almost trivial with germline engineering. Provided we know enough causal variants, one would only need to make a few hundred edits to a single cell to make someone smarter than any human that has ever lived. With somatic gene therapy you would almost certainly have to alter billions of cells to get anywhere.
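For intuition, here is a minimal back-of-envelope sketch of the purely additive model behind that claim; the edit count and per-variant effect size are illustrative assumptions, not measured values:

```python
# Purely additive toy model behind the "few hundred edits" intuition.
# Both numbers below are assumptions chosen for illustration, not estimates from real data.
n_edits = 300               # causal variants edited (assumption)
effect_per_edit = 0.2       # assumed average gain per edit, in IQ points
total_gain = n_edits * effect_per_edit
sd_units = total_gain / 15  # IQ is conventionally scaled to a standard deviation of 15
print(f"~{total_gain:.0f} IQ points, i.e. ~{sd_units:.0f} standard deviations above baseline")
```

Under those assumptions the shift is roughly four standard deviations, which is the sense in which a few hundred germline edits could exceed the natural human range, if the additive model holds and the edited variants are truly causal.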
Networking humans is interesting, but we have nowhere close to the bandwidth needed right now. As a rough guess, let's suppose we need bandwidth comparable to the corpus callosum; Neuralink is ~5 OOMs off.
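As a sketch of where a figure like that comes from (both quantities are rough order-of-magnitude assumptions, not precise counts):

```python
import math

# Both quantities are rough order-of-magnitude estimates.
corpus_callosum_fibers = 2e8   # ~200 million axons (commonly cited rough figure)
implant_channels = 1e3         # ~10^3 electrodes on a current-generation implant
gap = math.log10(corpus_callosum_fibers / implant_channels)
print(f"bandwidth gap: ~{gap:.1f} orders of magnitude")   # about 5.3
```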
I suspect human intelligence enhancement will not progress much in the next 5 years, not counting human/ML hybrid systems.
GPT-3 manages with a mere 12K dimensions on the residual stream (for 175B parameters), which carries all information between the layers. So tens of thousands of connections might turn out to be sufficient.
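As a rough sanity check on that figure, using the published GPT-3 architecture numbers (96 layers, a 12288-dimensional residual stream, ~50K vocabulary):

```python
# Approximate parameter count of a decoder-only transformer:
# ~12 * n_layers * d_model^2 for the attention and MLP weights, plus token embeddings.
d_model, n_layers, vocab = 12288, 96, 50257
core_params = 12 * n_layers * d_model ** 2
embedding_params = vocab * d_model
print(f"~{(core_params + embedding_params) / 1e9:.0f}B parameters")  # ~175B
```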
If so, one might imagine getting there via a high-end non-invasive BCI (as long as one uses closed loops, so that the electronic side can specifically aim at changing the signal it reads from the biological side, which is how it would know that its own signals are having an effect).
Of course, the risks of doing that are quite formidable even with non-invasive BCI, and various precautions should be taken. (But at least there is no surgery, plus one would have much quicker and less expensive iterations and a much less regulated environment, since nothing which is formally considered a medical procedure seems to be involved.)
One might want to try something like this in parallel with Neuralink-style efforts...
Eh, I mean, everything I hear from geneticists on any topic suggests that DNA interactions are crazy complex, because the whole thing wasn’t designed to be a sensible system of switches you just turn on and off (wasn’t designed at all, to be fair). I’d be really, really suspicious of this sort of confidence.
Also, honestly, I think this runs into problems analogous to AI. We talk about AI alignment, and sure, humans presumably don’t have such a large potential goal space, but:
1. You just messed with a bunch of brain stuff, so who knows what the fuck you’ve done; maybe in making the brain more rational you’ve also accidentally removed all empathy or baseline care for other humans.
2. Regardless of 1, imagine these super-genius mutant kids being raised in, I assume, some specific nurturing environment to help them flourish… I don’t think that results in particularly salt-of-the-earth people with empathetic goals. Being raised as a demigod savior of humanity by people who all invariably feel much stupider than you seems like exactly what you’d do to create some kind of supervillain.
And that’s of course suspending ethical judgement on the whole thing, and on the ways in which germline editing can go wrong (with scores of children actually born with weird genetic defects or mental disabilities).
Not really true—known SNP mutations associated with high intelligence have relatively low effect in total. The best way to make a really smart baby with current techniques is with donor egg and sperm, or cloning.
It is also possible that variance in intelligence among humans is due to something analogous to starting values in neural networks—lucky/crafted values can result in higher final performance, but getting those values into an already established network just adds noise. You can’t really change macrostructures in the brain with gene therapy in adults, after all.
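A toy illustration of that analogy, under entirely arbitrary assumptions (synthetic data, a small scikit-learn MLP, an ad-hoc weight-blending scheme); this says nothing about brains, it just shows the shape of the argument:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Train a small network on a synthetic task...
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X, y)
print("trained accuracy:", net.score(X, y))

# ...then blend a fresh random "initialization" into the already-trained weights.
rng = np.random.default_rng(0)
for W in net.coefs_:
    fresh = 0.1 * rng.standard_normal(W.shape)  # a new random starting point
    W[:] = 0.5 * W + 0.5 * fresh                # inject it into the established network
print("after injecting 'starting values':", net.score(X, y))  # typically noticeably worse
```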
Mostly, a useless dead end. The big problem is that, even assuming it’s socially acceptable to do it, the things genetic engineering can do are either locked behind massive investments of time and children, or are way too weak/harmful to be of much use. It’s an interesting field with a whole lot of potential, but I’d only support expanding its social acceptability and doing basic research right now, given that I see very few options for genetics.
Also, how much somatic gene editing we can do, not how much gamete gene editing, is the key taut constraint.
Maybe not as long as you’re thinking; people can be very intelligent and creative at young ages (and this may be amplified in someone gene-edited for high intelligence). ‘Adolescence’ is mostly a recent social construction, and a lot of norms/common beliefs about children exist more to keep them disempowered.
The bigger issue is that the stronger genetic modifications require children at all, and this time still matters even under optimistic assumptions about how much we can cut the maturation process short. And there’s a far greater problem with this type of modification:
It only works if we assume population growth or life extension. Life extension is a huge challenge in itself, and the population-growth assumption is probably wrong: fertility rates are way down from several decades or centuries ago, and that sinks schemes of intelligence augmentation that rely on new children. In particular, according to new models, world population will stop growing and we might only have about 30 billion new humans born.
So yeah, I am still pessimistic around gamete genetic strategies for human enhancement.
The population growth problem should be somewhat addressed by healthspan extension. A big reason why people aren’t having kids now is that they lack the resources, be it housing, money, or time. If we could extend the average healthspan by a few decades, then older people who have spent enough time working to accumulate those resources, but are currently too old to raise children, would be able to have kids. Moreover, people who already have many kids but have become too old would be able to have more. For those reasons, I don’t think a future birth limit of 30 billion is particularly reasonable.
However, I don’t think it will make a difference, at least for addressing AI. Once computing reaches a certain level of advancement, it will simply be infeasible for something the size of a human brain, no matter how enhanced, to compete with a superintelligence running on a supercomputer the size of a basketball court. And that level of computing/AI advancement will almost certainly be reached before the discussed genetic enhancement ever bears fruit, probably even before it’s made legal. Moreover, it’s doubtful we’ll see significant healthspan extension much before achieving ASI, which makes it even less relevant; although I don’t think any of these concerns were particularly significant in the first place, as it also seems we’ll see ASI long before global population decline.
I mean, as PR expressions go, that does make the likely death and suffering toll sound more acceptable, I guess.
Hm, can you explain more about this? Sorry that I’ve come late here, but I don’t understand what your comment is referring to or why you think the way you do.
Well, I interpret “children investments” here as “children who will be involved in the augmentation experiments”. I don’t expect germline modification to succeed on the first attempt (that’s one of the reasons it’s considered ethically problematic to begin with). Basically, point B might be better than point A, but the path from A to B almost surely involves some very low lows as we learn from trial and error, etc. I found the clinical nature of the expression dryly funny because I think it would realistically hide quite a hefty human cost. That’s not even including the obvious political complications and general societal second-order risks.
Well, it wasn’t a complete look at the issues of gamete/germline modification, but you pointed out another problem, which I didn’t include to save space and time. Fortunately, if you want to avoid extreme modifications, it’s a lot safer to do, thanks to an important insight from GeneSmith:
Ah, that makes sense. I guess if interactions were too complex it’d take some miraculous multi-step coincidence to produce a useful mutation, and there would be a lot more genetic illnesses.
There are interesting possibilities with BCI that you don’t list. But the bandwidth is too low due to the butcher number. https://tsvibt.blogspot.com/2022/11/prosthetic-connectivity.html
Not doing things because AGI comes soon is a mistake: https://tsvibt.blogspot.com/2023/07/views-on-when-agi-comes-and-on-strategy.html
Germline engineering is feasible, but society anti-wants it.
I agree that electrode-based BCIs don’t scale, but electrode BCIs are just the first generation of productized interfaces. The next generation of BCIs holds a great deal of promise. Depending on AGI timelines, they may still be too far out. They’re still probably worth developing with an eye toward alignment given that they have primarily non-overlapping resources (funding, expertise, etc.).
Butcher number & Stevenson/Kording scaling discussed more in the comments here: https://www.lesswrong.com/posts/KQSpRoQBz7f6FcXt3#comments
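For a sense of the timescale that scaling trend implies, here is an illustrative extrapolation; the channel counts are rough assumptions, and the doubling time is the historical figure from Stevenson & Kording:

```python
import math

# Stevenson & Kording trend: simultaneously recorded neurons have historically
# doubled roughly every ~7.4 years. The channel counts below are rough assumptions.
doubling_time_years = 7.4
current_channels = 1e3   # ~10^3 simultaneous channels today (rough)
target_channels = 2e8    # corpus-callosum-scale fiber count (rough)
doublings = math.log2(target_channels / current_channels)
print(f"~{doublings:.0f} doublings -> ~{doublings * doubling_time_years:.0f} years at the historical rate")
```

At the historical rate the gap takes on the order of a century to close, which is one way of seeing why electrode scaling alone looks insufficient.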
I have been wondering whether the new research into organoids will help. It would seem one of the easiest routes to a BCI is to use more brain cells.
One example would be the below:
https://www.cnet.com/science/ai-could-be-made-obsolete-by-oi-biocomputers-running-on-human-brain-cells/
Discontinuous progress is possible (and in neuro areas it is far more possible than in other areas). Making it easier for discontinuous progress to take off is the most important thing [e.g., reduced-inflammation neural interfaces].
MRI data can be used to deliver more precisely targeted ultrasound/tDCS/tACS (the effect sizes on intelligence may not be high, but they may still denoise brains (Jhourney wants to make this happen on faster timescales than meditation) and improve cognitive control/well-being, which still has huge downstream effects on most of the population).
Intelligence enhancement is not the only path: there are others, such as sensing/promoting better emotional regulation and neurofeedback, which have heavily disproportionate impact and are under-investigated (neurofeedback, in particular, seems to work really well for some people, but because there are so many practitioners and it’s very hit-and-miss, it takes a lot of capital [more so than time] to see if it really works for any particular person).
Reducing the rate at which brains age is feasible, maximizes lifetime human intelligence/compute, and has lots of low-hanging fruit (healthier diets alone can give 10 extra years), especially because there is huge variation in how much brains age.
https://www.linkedin.com/posts/neuro1_lab-grown-human-brain-organoids-go-animal-free-activity-7085372203331936257-F8YB?utm_source=share&utm_medium=member_android
I’m friends with a group in Michigan that is trying to do this. The upside risk is unknown because there are so many unknowns (but so little investment too, at the same time). They also broaden the pool of people who can contribute, since contributors don’t need to be math geniuses. There aren’t really limits on how to grow organoids (a major question is whether one can grow them larger than the actual brain without causing them to have the degeneracies of autistic brains). More people use them for drug testing than for computation.
I know many are trying 2D solutions, but 3D is important too (https://scitechdaily.com/japanese-scientists-construct-complex-3d-organoids-with-ingenious-device/?expand_article=1&fbclid=IwAR0n429zFV4uQnyds94tuTCFbPNdSdJecpMreWilv6kpQTRacgw64LTTZp4)
Doing vasculature well is one of the hardest near-term problems (frontierbio is working on this, though some question whether the blood vessels are “real vessels”), but scaffolding is another (maybe there are different ways to achieve the same level of complexity with alternative scaffolding: https://www.nature.com/articles/s41598-022-16247-7 ). Thought Emporium used plant tissue exteriors for scaffolding, though this obviously isn’t enough for complex brain tissue.
Bird brain organoids may be an interesting substrate b/c bird brains do more than mammalian brains with limited volume, and also don’t depend as much on 5-6 layer cortical architecture or complex gyrification/folding structure.
BTW, carbon-nanotube computing might be worth exploring. Here’s a preliminary app: https://www.americanscientist.org/article/tiny-lights-in-the-brains-black-box
look up thought emporium!! Potentially tangentially relevant: https://www.nature.com/articles/s42003-023-04893-0, Morphoceuticals, https://www.frontiersin.org/articles/10.3389/fnins.2019.01156/full, augmentationlab.org, https://minibrain.beckman.illinois.edu/2022/05/06/webinar-review-understanding-human-brain-structure-and-function-with-cerebral-organoids/, https://www.spectrumnews.org/news/organoids-hint-at-origins-of-enlarged-brains-in-autistic-people/ (INSAR has sev presentations of those who grow autistic brain organoids)
(talins)!
[note: I know that current progress of organoid research seems like it will never go fast enough to “make it”, but discontinuous rates of progress cannot be ruled out]
Pretty positive. I suspect that playing a lot of ordinary video games as a child contributed at least somewhat positively to my current level of fluid intelligence.
Playing games or doing training exercises specifically designed to train fluid intelligence and reasoning ability, using a BCI or other neurotech, seems like it could plausibly move the needle at least a bit, in both children and adults.
And I think even small enhancements could lead to large, compounding benefits when applied at scale, due to better coordination ability and general improvements to baseline sanity.
The research on brain training seems to disagree with you about how much it could have helped non-task-specific intelligence.
Maybe in-vivo genetic editing of the brain is possible. Adenoviruses, which are a normal delivery mechanism for gene therapy, can pass the blood-brain barrier, so it seems plausible to an amateur.
(It’s not obvious that this works in adult organisms; maybe the relevant genes act while the fetus grows or during childhood.)