Yes, I think the entire concept of the AI x-risk scary idea (e.g. Clippy) is predicated on machines being orders of magnitude smarter, in at least some ways, than their human builders. If instead there is a smooth transition to increasingly powerful human-augmented intelligence, then the transformative power of AI becomes evolutionary rather than revolutionary. Existing power structures remain in effect as we move into a post-human future.
Of course there will be issues of access to augmentation technologies, bioethics panels, government regulation, etc. But these won’t be existential risks.