I feel like it would be beneficial to add another sentence or two to the “goal” section, because I’m not at all convinced that we want this. As someone new to this topic, my emotional reaction to reading this list is terror.
Any of these techniques would surely be available to only a small fraction of the world’s population. And I feel like that would almost certainly result in a much worse world than today, for many of the same reasons as AGI. It will greatly increase the distance between the haves and the have-nots. (I get the same feeling reading this as I get from reading e/acc stuff — it feels like the author is only thinking about the good outcomes.)
Your answer would be that (1) AGI will be far more catastrophic, and (2) this is the only way to avoid an AGI catastrophe. Personally I’m not convinced. And even if I were, it would be really emotionally difficult to devote my resources to making the world much worse (even to save it from something even worse than that). So overall, I’d much rather bet on something else that will not *itself* make the world a much worse place.
Relatedly: Does your “value drift” column include the potential value drift of simply being much, much smarter than the rest of humanity (the have-nots)? Anecdotally, I think there’s somewhat of an inverse correlation between intelligence and empathy in humans. I’m not as worried about it as I am with AGI, but I’m much more worried than your column suggests. Imagine a super-intelligent Sam Altman.
And tangentially related: We actually have no idea if we can make this superintelligent baby “sane”. What you mean is that we can protect it from known genetic mental health problems. Sure, but that’s not the whole picture. Superintelligence will probably affect a person’s personality/values in ways we can’t predict. It could cause depression, psychopathic behavior, who knows.
Any of these techniques would surely be available to only a small fraction of the world’s population.
Not true! This consideration is the main reason I included a “unit price” column. Germline engineering should be roughly comparable in price to IVF, i.e. available to the middle class and up; maybe cheaper given more scale; and it certainly ought to be subsidized, given the decreased lifetime healthcare costs alone.
greatly increase the distance between the haves and the have-nots
Eh, unless you can explain this more, I think you’ve been brainwashed by Gattaca or something. Gattaca conflates class with genetic endowment, which is fine because it’s a movie about class via a genetics metaphor, but don’t be fooled into thinking it’s about genetics. Did the invention of smartphones increase or decrease the distance? In general, some technologies scale with money, and other technologies scale with body count. Each person only gets one brain to receive implants and stuff. Elon Musk, famously extremely rich and baby-obsessed, has what… 12 kids? A peasant could have 12 kids if they wanted to! Germline engineering would therefore be extremely democratic, at least for the middle class and up. The solution, of course, is to make the tech even cheaper and more widely available, not to inflict preventable disease and disempowerment on everyone’s kids.
Anecdotally, I think there’s somewhat of an inverse correlation between intelligence and empathy in humans.
Stats or GTFO.
Superintelligence will probably affect a person’s personality/values in ways we can’t predict. It could cause depression, psychopathic behavior, who knows.
First, the two specific things you listed are quite genetically heritable. Second, 7 SDs—which is the most extreme form that I advocate for—is only a little bit outside the Gaussian human distribution. It’s just not that extreme of a change. It seems quite strange to postulate that a highly polygenic trait, if pushed to 5350 out of 10000 trait-positive variants, would suddenly cause major psychological problems, whereas natural-born people with 5250 or 5300 out of 10000 trait-positive variants are fine.
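For concreteness, here is the rough arithmetic behind those numbers. It’s a minimal sketch assuming roughly 10,000 independent trait-affecting variants, each at frequency 0.5, so that the natural-born count of trait-positive variants is approximately binomial; the 10,000 figure is from the sentence above, while the independence and 0.5-frequency assumptions are idealizations.

```latex
% Sketch: binomial model of trait-positive variant counts.
% Assumes ~10,000 independent variants, each at frequency 0.5 (idealization).
\[
  \mu = np = 10000 \times 0.5 = 5000, \qquad
  \sigma = \sqrt{np(1-p)} = \sqrt{10000 \times 0.25} = 50
\]
\[
  \mu + 5\sigma = 5250, \qquad \mu + 6\sigma = 5300, \qquad \mu + 7\sigma = 5350
\]
```

On that model, the engineered +7 SD case differs from the most extreme natural-born cases by only 50–100 variants out of 10,000.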
I think the terror reaction is honestly pretty reasonable. ([edit: Not, like, necessarily meaning one shouldn’t pursue this sort of direction on balance. I think the risks of doing this badly are real and I think the risks of not doing anything are also quite real and probably great for a variety of reasons])
One reason I nonetheless think this is very important to pursue is that we’re probably going to end up with superintelligent AI this century, and it’s going to be dramatically more alien and scary than the tail-risk outcomes here.
I do think the piece would be improved if it acknowledged and grappled with that more.
Ok, I added some links to “Downside risks of genomic selection”.
The essay is just about the methods. But I added a line or two linking to https://tsvibt.blogspot.com/2022/08/downside-risks-of-genomic-selection.html