Yeah, there’s a value-drift column in the table of made-up numbers. Values matter and are always at stake, and are relatively more at stake here; and we should think about how to do these things in a way that avoids core value drift.
You have major advantages when creating humans but tweaked somehow, compared to creating de novo AGI.
The main thing is that you’re starting with a human. You start with all the stuff that determines human values—a childhood, basal ganglia giving their opinions about stuff, a stomach, a human body with human sensations, hardware empathy, etc. Then you’re tweaking things—but not that much. (Except in the case of brain emulation, which is why it gets the highest value drift rating.)
Another thing is that there’s a strong built-in limit on the strength of one human: skull size. (Also other hardware limits: one pair of eyes and hands, one voicebox, probably 1 or 1.5 threads of attention, etc.) One human just can’t do that much—at least not without interfacing with many other humans. (This doesn’t apply to brain emulation, and potentially applies less to some brain-brain connectivity enhancements.)
Another key hardware limit is on how much you can reprogram your own thinking, just by introspection and deliberate effort. You can definitely reprogram the high-level protocols you follow, e.g. heuristics like “investigate border cases”; you can maybe influence lower-level processes such as concept-formation by, e.g., getting really good at making new words; but you probably can’t, IDK, tell your brain to allocate microcolumns to analyzing commonalities between the top 1000 best current candidate microcolumns for doing some task; and you definitely can’t reprogram neuronal behavior (except through the extremely blunt-force method of drugs).
A third thing is that there’s a more plausible way to actually throttle the rate of intelligence increase, compared to AI. With AI, there’s a huge compute overhang, and you have no idea what dial you can turn that will make the AI become a genuinely creative thinker, like a human, but not go FOOM. With humans, for the above reasons, you can guess pretty reasonably that creeping up the number of prosthetic connections, or the number of transplanted neurons, or the number of IQ-positive alleles, will have a more continuous effect.
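The continuity claim can be sketched as a toy model (all numbers entirely made up, purely illustrative): if each added increment of the dial—prosthetic connections, transplanted neurons, IQ-positive alleles—contributes a small, slightly diminishing effect, then creeping the dial up yields a smooth curve with no sudden jump, which is exactly the property you'd want for throttling.

```python
# Toy model (hypothetical numbers): turning a "dial" like the count of
# edited IQ-positive alleles up in small steps gives a continuous effect,
# unlike an AI capability dial that might jump discontinuously.

def toy_ability_gain(n_steps: int, effect_per_step: float = 0.2,
                     diminishing: float = 0.995) -> float:
    """Hypothetical cumulative ability gain from n_steps dial increments.

    Each additional increment contributes slightly less than the last,
    so the curve is smooth and each step is individually small.
    """
    gain = 0.0
    marginal = effect_per_step
    for _ in range(n_steps):
        gain += marginal
        marginal *= diminishing
    return gain

# Creep the dial up in chunks and look at the per-chunk change.
curve = [toy_ability_gain(n) for n in range(0, 501, 100)]
steps = [b - a for a, b in zip(curve, curve[1:])]
# The increments shrink smoothly rather than jumping.
```

The point is not the specific functional form—nobody knows the real curve—but that, unlike with AI, the priors here favor continuity, so small dial moves give you small, observable changes.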
A fourth major advantage is that you can actually see what happens. In the genomic engineering case, you can see which alleles lead to people being sociopaths or not. You get end-to-end data. And then you can just select against those (but not too strongly). (This is icky, and should be done with extreme care and caution and forethought, but consider status quo bias—are the current selection pressures on new humans’ values and behaviors really that great?)
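The “select against those, but not too strongly” idea can be made concrete with a sketch (the allele, its starting frequency, and the cap are all made-up assumptions): cap the per-generation selection pressure, so the harmful-allele frequency declines gradually rather than being slammed down in one heavy-handed intervention.

```python
# Illustrative sketch (all frequencies and caps are hypothetical):
# selecting against a hypothetical harmful allele, with a hard cap on
# how much the frequency is allowed to move per generation.

def next_gen_frequency(freq: float, max_shift: float = 0.01) -> float:
    """One generation of capped selection against an allele.

    Uncapped selection here would halve the frequency each generation;
    the cap limits the per-generation change to max_shift.
    """
    desired = freq * 0.5                    # uncapped target
    shift = min(freq - desired, max_shift)  # but never move faster than the cap
    return freq - shift

freq = 0.10
history = [freq]
for _ in range(20):
    freq = next_gen_frequency(freq)
    history.append(freq)
# Frequency declines every generation, but never by more than max_shift.
```

The cap is doing the “but not too strongly” work: you get the end-to-end feedback loop (observe outcomes, adjust selection) while keeping any single generation's intervention small enough to notice and reverse mistakes.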