Of course we can morally compare types A and B, just as we can morally compare an AI whose goal is to turn the world into paperclips and one whose goal is to make people happy.
However, rather than “objectively better”, it would be clearer to say “more in line with our morals” or some such. It’s not as if our morals came from nowhere, after all.
See also: “The Bedrock of Morality: Arbitrary?”