Define “serious”. You can get LifeView to give you embryo raw data and then run published DL models on those embryos and eke out a couple of IQ points that way. That’s a serious enough improvement over the norm that it would counterbalance the trend akarlin speaks of by several times. Perhaps no one will ever industrialize that service or improve current models, but then that’s another argument.
The marginal personal gain of 2 points comes with a risk of damage from mistakes by the gene editing tool used: mistakes that can lead to lifetime disability, early cancer, etc.
You probably would need a “guaranteed top 1 percent” outcome for both IQ and longevity and height and beauty and so on to be worth the risk, or far more reliable tools.
There’s no gene editing involved. The technique I just described works solely on selection. You create 10 embryos, use DL to identify the one that looks smartest, and implant that one. That’s the service LifeView already provides, only for health instead of psychometrics. I think it’s only marginally cost effective because of the procedures necessary, but the baby is fine.
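The selection procedure above can be sketched with a quick Monte Carlo: the expected gain from implanting the best of n embryos is just the expected maximum of n draws from the predictor's score distribution (about 1.54 SD for n = 10). The predictor SD used below is an illustrative assumption for a weak current-generation predictor, not a published figure.

```python
import random
import statistics

def expected_gain(n_embryos=10, predictor_sd_points=1.5, trials=100_000):
    """Rough expected IQ gain (in points) from picking the embryo with the
    highest predicted score out of n_embryos.

    predictor_sd_points is the SD, in IQ points, of the *predicted* score
    among sibling embryos. This is an assumed value: current predictors
    capture only a small slice of the full sibling variation, hence the
    small number here.
    """
    rng = random.Random(0)
    gains = []
    for _ in range(trials):
        # Predicted scores of the sibling embryos, centered on the parental mean.
        scores = [rng.gauss(0.0, predictor_sd_points) for _ in range(n_embryos)]
        # Gain is the predicted score of the embryo you actually pick.
        gains.append(max(scores))
    return statistics.mean(gains)

# Roughly 1.54 x the predictor SD for n = 10 embryos.
print(round(expected_gain(), 1))
```

With these toy numbers the gain lands around a couple of points, consistent with the claim above; a stronger predictor (larger SD of predicted scores) scales the gain linearly.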
Ok, that works, and yes, it already exists as a service or soon will. The issue is that it’s not very powerful. It certainly doesn’t make humans competitive in an AI future; most parents, even with 10 rolls of the dice, won’t have the gene pool for a top 1 percent human in any dimension.
I think you are misunderstanding me. I’m not suggesting that any amount of genetic enhancement is going to make us competitive with a misaligned superintelligence. I’m responding to the concern akarlin raised about pausing AI development by pointing out that if this tech is industrialized, it will outweigh any natural problems caused by smart people having fewer children today. That’s all I’m saying.
Sure. I concede that if, by some incredible feat of global coordination, humans managed to all agree on and actually enforce a ban on AGI development, then in far-future worlds they could probably still pull that off.
What will probably ACTUALLY happen is humans will build AGI. It will behave badly. Then humans will build restricted AGI that is not able to behave badly. This is trivial and there are many descriptions on here on how a restricted AGI would be built.
The danger, of course, is deception. If the unrestricted AGI acts nice until it’s too late, then that’s a loss scenario.
We are not in an overhang for serious IQ selection based on my understanding of what people doing research in the field are saying.