This might be okay if they respected the autonomy of unaugmented people, but all the arguments about AGI being hard to control, and destroying its creators by default, apply equally well to hyperaugmented humans. If you try to coexist with entities vastly more powerful than you, you will eventually be crushed or deprived of key resources. In fact, this applies even more strongly to humans than to AIs, since humans were not explicitly designed to be helpful or benevolent.
I would go further and say that augmented humans are probably riskier than AIs, for several reasons: much of the experimentation that is legal to do on an AI cannot be done on a human; aligning a human to you is far harder, both legally and practically, because it amounts to brainwashing; and it's easier to control an AI's data sources than a human's.
This is a big reason why I never really liked the human-augmentation path to solving AI alignment that people like Tsvi Benson-Tilsen advocate: you now potentially have two alignment problems instead of one (link below):
https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods