Ctrl+F and replace humanism with “transhumanism” and you have me aboard. I consider commonality of origin to be a major factor in assessing other intelligent entities, even if millions of years of divergence mean they end up as different from their common Homo sapiens ancestor as a rat is from a whale.
I am personally less inclined to grant synthetic AI rights, for the simple reason that we can program them not to chafe at the absence of those rights, and doing so isn’t the imposition it would be if we did the same to a biological human (at least after birth).
If you met a race of aliens, intelligent, friendly, etc., would you “turn into a Warhammer 40K Inquisitor” who considers the xenos unworthy of any moral consideration whatsoever? If not, why not?
I would certainly be willing to aim for peaceful co-existence and collaboration, unless we came into conflict for ideological reasons or over plain resource scarcity. There’s only one universe to share, and only so much in the way of resources in it, even if it’s a staggering amount. The last thing we need is potential “Greedy Aliens” in the Hansonian sense.
So while I wouldn’t give the aliens zero moral value, it would be less than I’d give another human or human-derivative intelligence, for that fact alone.
Honestly, that’s just not a present concern, so I don’t bother thinking about it too much. There’s certainly plenty of room for humans modifying themselves in ways I would consider OK, and some modifications I would probably consider a step too far, but it’s not going to be my decision to make anyway; I don’t know as much as those who might actually need to make such decisions will. So yeah, it’s an asterisk for me too, but I think we can satisfyingly call my viewpoint “humanism” with the understanding that one or two cyber implants won’t change that (though I don’t exclude the possibility that thorough enough modification in a bad direction could make someone not human anymore).