I don’t understand. The hard problem of alignment/CEV/etc. is that it’s not obvious how to scale intelligence while “maintaining” utility function/preferences, and this still applies to human intelligence amplification.
I suppose this is fine if the only improvement you can expect beyond human-level intelligence is “processing speed”, but I would expect superhuman AI to be more intelligent in a variety of ways.
Yeah, there’s a value-drift column in the table of made-up numbers. Values matter and are always at stake, and are relatively more at stake here; and we should think about how to do these things in a way that avoids core value drift.
You have major advantages when creating humans, tweaked somehow, compared to creating de novo AGI.
The main thing is that you’re starting with a human. You start with all the stuff that determines human values—a childhood, basal ganglia giving their opinions about stuff, a stomach, a human body with human sensations, hardware empathy, etc. Then you’re tweaking things—but not that much. (Except for in brain emulation, which is why it gets the highest value drift rating.)
Another thing is that there’s a strong built-in limit on the strength of one human: skull size. (Also other hardware limits: one pair of eyes and hands, one voicebox, probably 1 or 1.5 threads of attention, etc.) One human just can’t do that much—at least not without interfacing with many other humans. (This doesn’t apply to brain emulation, and potentially applies less for some brain-brain connectivity enhancements.)
Another key hardware limit is that there’s a limit on how much you can reprogram your thinking, just by introspection and thinking. You can definitely reprogram the high-level protocols you follow, e.g. heuristics like “investigate border cases”; you can maybe influence lower-level processes such as concept-formation by, e.g., getting really good at making new words, but you maybe can’t, IDK, tell your brain to allocate microcolumns to analyzing commonalities between the top 1000 best current candidate microcolumns for doing some task; and you definitely can’t reprogram neuronal behavior (except through the extremely blunt-force method of drugs).
A third thing is that there’s a more plausible way to actually throttle the rate of intelligence increase, compared to AI. With AI, there’s a huge compute overhang, and you have no idea what dial you can turn that will make the AI become a genuinely creative thinker, like a human, but not go FOOM. With humans, for the above reasons, you can guess pretty reasonably that creeping up the number of prosthetic connections, or the number of transplanted neurons, or the number of IQ-positive alleles, will have a more continuous effect.
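As a minimal sketch of why I’d guess continuity, assuming a simple additive model (made-up effect sizes, not real data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy additive model: each intervention (an edited allele, a batch of
# transplanted neurons, a prosthetic connection) contributes a small,
# independent effect. Under this assumption the dose-response curve is
# smooth -- there's no single dial that jumps from "smart human" to FOOM.
effects = np.sort(rng.exponential(scale=0.02, size=500))[::-1]  # SD units

for n in (0, 25, 50, 100, 200, 400):
    print(f"{n:3d} interventions -> ~{effects[:n].sum():+.2f} SD")
```

If some interventions interact super-linearly the curve gets steeper, but you’d still see that coming as you creep the dial up, which is the point.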
A fourth major advantage is that you can actually see what happens. In the genomic engineering case, you can see which alleles lead to people being sociopaths or not. You get end-to-end data. And then you can just select against those (but not too strongly). (This is icky, and should be done with extreme care and caution and forethought, but consider status quo bias—are the current selection pressures on new humans’ values and behaviors really that great?)
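To make “select against those (but not too strongly)” concrete, here’s a minimal sketch; the scores and the cap are hypothetical, not from any real screening pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical polygenic scores for 20 candidates, in SD units.
iq_score = rng.normal(0, 1, size=20)
risk_score = rng.normal(0, 1, size=20)  # e.g. antisocial-behavior risk

# "Not too strongly": instead of minimizing risk outright, only exclude
# candidates beyond a mild cap, then take the best remaining IQ score.
# The cap (+1 SD here) bounds the selection pressure on the risk trait.
RISK_CAP = 1.0
eligible = np.flatnonzero(risk_score < RISK_CAP)
best = eligible[np.argmax(iq_score[eligible])]
print(f"chose candidate {best}: IQ {iq_score[best]:+.2f} SD, "
      f"risk {risk_score[best]:+.2f} SD")
```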
Ok, just want to make a very small neuroscience note here: skull size isn’t literally the limiting factor alone, it’s more like ‘resources devoted to brain, and a cost function involving fetal head volume at time of birth’. Why not literally skull size? Well, because infant skulls are quite malleable, and if the brain continued growing significantly in the period after birth, it would have several months in which to expand without the skull physically blocking it.
You can see this quite clearly in the sad case of an infant who has overproduction of fluid in the brain (hydrocephalus). The increased pressure within the skull damages and shrinks the brain, but at the same time significantly increases skull size. If it were neural tissue overproduction instead that was causing the increase, the infant skull would similarly expand, making room for more brain tissue. You could induce an increase in skull size by putting a special helmet on the infant that kept a slight negative air pressure outside the skull. This wouldn’t affect brain size, though: brain size is controlled by the fetal development genes that tell the neural stem cells how many times to divide.
When I was looking into human intelligence enhancement, this was one of the things I researched. Experiments in mice that increased the number of times their neural stem cells divide, to give them more neurons, resulted in… bigger brains, but highly disordered ones: some combination of the neurons getting crammed too tightly into the skull, and the extra rounds of division messing up the timing patterns of the various regulatory genes that help set up the repeated cortical motifs (e.g. microcolumns). Behaviorally, the brain-expanded mice were unusually anti-social and aggressive (mice usually like living in groups).
So, it’s certainly possible to engineer a fetal brain to have extra neurons and an overall bigger brain, but it’d take more subtle adjustments to a larger number of genes, rather than intense adjustments to one or two key genes. That way the brain would stay orderly while getting larger.
Of course, you run into other problems if you increase brain size, such as the inter-regional axons having to travel further, and them being relatively fewer in proportion to the local neurons.
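A back-of-envelope way to see the wiring problem (my numbers; assumes purely isometric scaling with fixed neuron density):

```python
# Scale brain volume by k while keeping geometry and neuron density
# fixed. Linear dimensions grow as k**(1/3), so inter-regional axons
# lengthen by k**(1/3); a tract's cross-sectional area (e.g. the corpus
# callosum) grows as k**(2/3), while neuron count grows as k -- so
# long-range fibers per neuron shrink as k**(-1/3).
for k in (1.0, 1.5, 2.0):
    axon_length = k ** (1 / 3)
    tract_area = k ** (2 / 3)
    fibers_per_neuron = tract_area / k
    print(f"volume x{k:.1f}: axon length x{axon_length:.2f}, "
          f"long-range fibers per neuron x{fibers_per_neuron:.2f}")
```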
Overall, my conclusion after reading a bunch of papers was that substantially altering fetal genetics beyond the ranges of expression normally observed in the existing human population is a bad idea. Getting all the intelligence-associated genes to be at or near their maximum values (the best known variants) does seem great though.
I think for short-term, large-effect-size changes, though, you’re much better off focusing on brain-computer interfaces.
Thanks, this is helpful info. I think when I’m saying skull size, I’m bundling together several things:
As you put it, “a cost function involving fetal head volume at time of birth”.
The fact that adults, without surgery, have a fixed skull size to work with (so that, for example, any sort of drug, mental technique, or in vivo editing would have some ceiling on the results).
The fact that, as you give some color to, trying to engineer the brain to get around this barrier potentially puts you up against very thorny bioengineering problems, because then you’re trying to go from zero to one on a problem that evolution didn’t solve for you. Namely, evolution didn’t solve “how to have 1.5x as many neurons, and set up the surrounding support structures appropriately”. I agree brain implants are the most plausible way around this.
The fact that evolution didn’t solve the much-bigger-brain problem, so applying all the info that evolution did work out regarding building capable brains would still result in something with a comparable skull-size limit, which would require some other breakthrough technology to get around.
(And I’m not especially saying that there’s a hard evolutionary constraint with skull size, which you might have been responding to; I think we’d agree that there’s a strong evolutionary pressure on natal skull size.)
Getting all the intelligence-associated genes to be at or near their maximum values (the best known variants) does seem great though.
Actually I’d expect this to be quite bad, though I’m wildly guessing. One of the main reasons I say the target is +7SDs, maybe +8, rather than “however much we can get”, is that the extreme version seems much less confidently safe. We know humans can be +6SDs. It would be pretty surprising if you couldn’t push out a bit from that, if you’re leveraging the full adaptability of the human ontogenetic program. But going +15SDs or whatever would probably be more like the mice you mentioned; a toy calculation after the quoted list below gestures at why that’s so far outside tested territory. Some ways things could go wrong, from “Downsides …”:
skull problems (size, closure); blood flow problems to brain tissue; birthing problems; brain cancer (maybe correlated with neurogenesis / plasticity?); metabolic demands (glucose, material for myelination and neurotransmitters, etc.); mechanical overpacking in the brain (e.g. restricting CSF, neurotransmitter flow, etc.); interneuronal “conflict” (if humans are tuned to be near a threshold that allows exploration while avoiding intractable conflict); plaque / other waste; exhausting capacity of some shared structures such as the corpus callosum; exhausting physical room for receptors / pumps / synapse attachment; disrupted balance of ions.
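To gesture at the scale: under a standard additive polygenic model with illustrative numbers (not real GWAS data), maxing every locus lands absurdly far outside the observed human range:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy additive model: n loci, beneficial-allele frequencies p,
# per-allele effects beta (illustrative, not real GWAS estimates).
n = 10_000
p = rng.uniform(0.05, 0.95, size=n)
beta = np.abs(rng.normal(0, 1, size=n))

# Rescale so these loci explain ~half the trait variance (SD units).
var = (2 * p * (1 - p) * beta**2).sum()
beta *= np.sqrt(0.5 / var)

# Pushing every locus to the beneficial homozygote gains
# 2*(1-p)*beta per locus over the population mean.
gain = (2 * (1 - p) * beta).sum()
print(f"expected gain if every locus is maxed: +{gain:.0f} SD")
# Prints on the order of +100 SD -- wildly beyond the +6 SD or so
# that any actual human genome has been observed to instantiate.
```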
You wrote:
I think for short-term, large-effect-size changes, though, you’re much better off focusing on brain-computer interfaces.
Can you give more detail on what might actually work? If it involves adding connections somehow, what are you connecting to what? How many connections do you think you need to get large effects on problem solving ability? What are your main reasons for thinking it would have large effects? What do you think are the specific technical bottlenecks to getting that technology?
BTW, do not maim children in the name of X-risk reduction (or in any other name).
Yes, another good reason to focus on interventions for consenting adults, rather than fetuses or infants.