Shouldn’t someone (some organization) be putting a lot of effort and resources into this strategy (quoted below), in the hope that AI timelines are still long enough for the strategy to work? With enough resources, it should buy at least a few percentage points of non-doom probability (even now)?
Given that there are known ways to significantly increase the number of geniuses (i.e., von Neumann level, or IQ 180 and greater), by cloning or embryo selection, an obvious alternative Singularity strategy is to invest directly or indirectly in these technologies, and to try to mitigate existential risks (for example by attempting to delay all significant AI efforts) until they mature and bear fruit (in the form of adult genius-level FAI researchers).
For starters, why aren’t we already offering the most basic version of this strategy as a workplace health benefit within the rationality / EA community? For example, on their workplace benefits page, OpenPhil says:
We offer a family forming benefit that supports employees and their partners with expenses related to family forming, such as fertility treatment, surrogacy, or adoption. This benefit is available to all eligible employees, regardless of age, sex, sexual orientation, or gender identity.
Seems a small step from there to making “we cover IVF for anyone who wants it (even if your fertility is fine) + LifeView polygenic scores” into a standard part of the alignment-research-agency benefits package. Of course, LifeView only offers health scores, but they will also give you the raw genetic data. Processing that data yourself, DIY style, could be made easier: a blog post could describe how to use an open-source piece of software, where to find the latest version of EA3 (the educational-attainment polygenic score), and so forth.
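To give a rough idea of what that DIY processing could look like, here is a minimal Python sketch of the naive additive approach: for each SNP, count copies of the effect allele and multiply by its published effect size, then sum. Everything here is an assumption for illustration — the file names, the column layouts, and the raw-data format (LifeView’s actual export may differ) — and a serious pipeline would use established open-source tools such as PLINK or PRSice and handle linkage disequilibrium, ancestry, and quality control.

```python
# Minimal sketch of a naive additive polygenic score (illustrative only).
# Assumed inputs (hypothetical formats):
#   weights.tsv        tab-separated columns: rsid, effect_allele, beta (from a public GWAS)
#   raw_genotypes.txt  tab-separated columns: rsid, chromosome, position, genotype (e.g. "AG")
import csv

def load_weights(path):
    """Map rsid -> (effect allele, effect size)."""
    weights = {}
    with open(path) as f:
        for row in csv.DictReader(f, delimiter="\t"):
            weights[row["rsid"]] = (row["effect_allele"].upper(), float(row["beta"]))
    return weights

def naive_score(genotype_path, weights):
    """Sum beta * (number of effect-allele copies) over SNPs present in both files."""
    score, used = 0.0, 0
    with open(genotype_path) as f:
        for line in f:
            if line.startswith("#"):          # skip comment/header lines
                continue
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 4:
                continue
            rsid, genotype = fields[0], fields[3].upper()
            if rsid in weights and genotype not in ("--", "NN"):  # skip no-calls
                allele, beta = weights[rsid]
                score += genotype.count(allele) * beta   # 0, 1, or 2 copies
                used += 1
    return score, used

if __name__ == "__main__":
    w = load_weights("weights.tsv")
    score, n = naive_score("raw_genotypes.txt", w)
    print(f"Naive additive score over {n} SNPs: {score:.4f}")
```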
All this might be a lot of trouble for a rather small benefit, if you are pessimistic about the potential of PGT (preimplantation genetic testing). We are not talking von Neumanns here. But it might be worth creating streamlined community infrastructure around this anyway, just in case the benefit becomes larger as our genetic techniques improve.
You get the same intelligence gain, in a way I find considerably less dubious, by lifting poor kids out of poverty and giving them access to decent nutrition, safety, and education. Cheaper, too. Also more just. And more diverse.
I don’t see any realistic world where you manage both to get government permission to genetically engineer children for intelligence and to raise them specifically to do safety work, far enough in advance that they actually have time to contribute, and in a way that outweighs the PR risk.
Embryo selection for intelligence does not require government permission. You can do it right now; you only need the models and the DNA. For months I’ve been planning to release a website that lets people upload the genetic data they get from LifeView, but I haven’t gotten around to finishing it, for the same reason I suspect others aren’t doing it.
Part of me wants not to post this, just because I want to be the first to make the website, but that seems immoral, so here it is.
In many places, including the US, neither cloning nor embryo selection is illegal. (This article suggests that for cloning you may have to satisfy the FDA’s safety concerns, which perhaps ought to be possible for a well-resourced organization.) And you don’t have to raise them specifically for AI safety work. I would probably announce that they will be given well-rounded educations to help them solve whatever problems humanity may face in the future.
Sounds good to me! Anyone up for making this an EA startup?
Having more von Neumann-level geniuses around seems like an extremely high-impact intervention for most things, not just singularity-related ones.
As for tractability, I can’t say anything about how hard this would be to get past regulators, or how much engineering work remains before human cloning is market-ready, but finding participants seems pretty doable? I’m not sure yet whether I want children, but if I decide I do, I’d totally parent a von Neumann clone. If that required moving to some country where cloning isn’t banned, I might do that as well. I bet lots of other EAs would too.
Sure, why not. Sounds dignified to me.
Interesting. I had no idea embryo selection was something you could already do without government permission.