Sacrificial transhumanism — a person may feel that humanity is worth sacrificing in order to achieve a transhumanist future. Even if transhumanism and/or cyborgism are fine and good for some people to pursue, I can’t get behind the idea that it’s okay to sacrifice humanity to achieve these developments, because I think it’s unnecessarily disloyal to humanity. Still, I’ve met people who feel this way.
I just submitted an essay to the Cosmos essay competition about maintaining human autonomy in the age of AI. My take is quite different from yours. I don’t expect that we will do more than delay and slightly slow the coming intelligence explosion. I think there are just too many disjoint paths forward, and the costs of blocking them all off are too extreme to expect any government body to bite that bullet (much less all the world’s governments in coordination).
So my solution, in short, is to amplify humanity: to put our fate in the hands of those brave souls who pursue transhumanism in order to keep up with the growing intelligence and power of AI. This probably means a short period of cyborgs, with purely digital humans taking over after a few years. I don’t think it’s necessary to sacrifice humanity to achieve transhumanism. I expect the vast majority of humans to remain unenhanced. It’s just that they will then be ‘wards of the state’, so to speak: at such a large mental disadvantage that they will be at the mercy of the transhumans and the AIs.
This is clearly a risky situation, but it could be okay so long as the transhumans remain human in their fundamental values, and the AIs remain aligned to our values (or at least obedient and safe to use). I see this as the only plausible path forward; all other paths seem unworkable to me.
[Edit: here’s my essay, expanded a bit: https://www.lesswrong.com/posts/NRZfxAJztvx2ES5LG/a-path-to-human-autonomy ]