Insofar as an AI cares-as-a-terminal-goal about keeping humans around, it will care about its own alien conception of “humans” which does not match ours, and will happily replace us with less resource-intensive (or otherwise preferable) things which we would not consider “human”.
As a side note, I’m not sure about this. It seems plausible to me that the super-stimulus-of-a-human-according-to-an-alien-AI-value-function is a human in the ways that I care about, in the same way that an em is in some ways extremely different from a biological human, but is still a human in the ways I care about.
I’m not sure I should write off as valueless a future dominated by AIs that care about a weird alien abstraction of “human”, even one that admits extremely weird edge cases.