That’s a valid distinction. But from the perspective of serious existential risks, an AI that has a similar morality but really doesn’t like humans poses almost as much existential risk as an Unfriendly AI.
I agree.