For your second AI, it is worth distinguishing between “friendly” and “Friendly”—it is Friendly in the sense that it understands and appreciates the relatively narrow target that is human morality; it is just unimpressed with humans as allies.
That’s a valid distinction. But from the perspective of serious existential risks, an AI that shares our morality but really doesn’t like humans poses almost as much potential existential risk as an Unfriendly AI.
I agree.