I agree that this may indeed be a likely outcome. But this raises the question, over what timeframe are we talking about, and what does extinction look like?
Suppose humanity "goes extinct" in 100 years because the AGI decides the cheapest, lowest-risk way to automate humans away is to augment human biology until we are effectively hybrid machines. That doesn't strike me as a bad outcome, or as "curtailing our capability." If what remains is a hybrid biomechanical species that still retains the main facets of humanity, that seems great. Nor does it seem bad that humans 100 years from now might be unable to procreate with humans today, if the genetic alterations responsible also increase our longevity, emotional intelligence, and health.
I’d be curious why you picked 100 years as the time frame it would take for the AI to develop this technology. How do you expect technology to progress over time in this scenario?