The physical fact that humans are compelled by these sorts of logical facts is not one of the facts that make saving the baby the right thing to do. If I did assert that this physical fact was involved, I would be a moral relativist, and I would say the sorts of other things that moral relativists say, like “If we wanted to eat babies, then that would be the right thing to do.”
The moral relativist who says that doesn’t really disagree with you. The moral relativist considers a different property of algorithms to be the one that determines whether an algorithm is a morality, but this is largely a matter of definition.
For the relativist, an algorithm is a morality when it is a logic that compels an agent (in the limit of reflection, etc.). For you, an algorithm is a morality when it is the logic that in fact compels human agents (in the limit of reflection, etc.). That is why your view is a kind of relativism. You just say “morality” where other relativists would say “the morality that humans in fact have”.
You also seem more optimistic than most relativists that all non-mutant humans implement very nearly the same compulsive logic. But other relativists admit that this is a real possibility, and they wouldn’t take it to mean that they were wrong to be relativists.
If there is an advantage to the relativists’ use of “morality”, it is that their use doesn’t prejudge the question of whether all humans implement the same compulsive logic.
I agree with this comment and think it makes strong points against Eliezer’s way of talking about this issue.