If not, why aren’t you in the camp of those who wish to improve human intelligence?
I’ll take this one because I’m almost certain Eliezer would answer the same way.
Working on AI is a more effective way of increasing the intelligence of the space and matter around us than increasing human intelligence is. The probability of making substantial progress is higher.
I disagree. Human intelligence is clearly misoptimised for many goals, and I see no clear evidence that it’s easier to design a new intelligence from scratch than to optimise the human one.
They have very different possible effect profiles: "FOOM!" vs. "We are awaiting GFDCA [Genetics, Food, Drugs, and Cybernetics Administration] approval of this new implant/chimerism/genehack." So the average impact of human-optimisation may be lower, but my probability estimate for human-improvement tech is much higher.