So once I create a friendly-to-me AI, I am the only morally significant agent in existence? I think not.
Relevant moral significance seems to be determined far more by whether any agent (not necessarily the subject itself) can kick ass on the subject's behalf. So infants, fish, or cows can have moral significance simply because someone says so (and is willing to back that up).
Fortunately for you, this means that if I happen to gain overwhelming power, you will remain a morally significant agent purely at my whim.