In my post, A Path to Human Autonomy, I argue that the only stable equilibrium in the long term (decades) is for at least some humans to undergo intelligence augmentation. Furthermore, the augmentation trajectory leads inevitably towards fully substrate-independent digital people.
There are many paths we might take to get there, but I’m pretty sure that what we face before the end of the century is either that or civilizational collapse.
A superpowerful AI singleton guarding humanity could save us from destroying ourselves, but if human progress ends there, then so does human autonomy.
I think if you really spend some time imagining a world where the AGI is smarter and more physically powerful than any human, and gets smarter and more powerful every year… you realize that “better democratic control of intent-aligned AGI” is at best a temporary solution.
I claim that anything that’s undergone that much intelligence augmentation can’t reasonably be called “human”.
Perhaps “human autonomy” isn’t the right goal?
[Deleted a previous comment that misunderstood this as a reply to mine above]
Ok. I’d agree to “transhuman”. I will say that that seems meaningfully different to me from a very alien AI, with very different values, going rogue and colonizing the lightcone.
Edit: I think it would make a lot of sense if Earth were considered sort of a nature preserve, or like a tribal reservation, for the “vanilla humans”, while space was the domain of transhumans, digital people, and AI.