Yes—and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view. It’s what people are automatically going to think you’re talking about if you go around shouting “Yes Virginia, there are moral facts after all!”
Agreed that this is important. (ETA: I now think Eliezer is right about this.)
Meanwhile, the general public has a term for the view that you and I share: they call it “moral relativism”.
We believe (a) that there is no separable essence of goodness, but also (b) that there are moral facts that people can be wrong about. I think the general public understands “moral relativism” to exclude (b), and I don’t think there’s any short term in common (not philosophical) usage that includes the conjunction of (a) and (b).
What makes the theory relativist is simply the fact that it refers explicitly to particular agents—humans.
Eliezer doesn’t define morality in terms of humans; he defines it (as I understand) in terms of an objective computation that happens to be instantiated by humans. See No License to be Human.
We believe (a) that there is no separable essence of goodness, but also (b) that there are moral facts that people can be wrong about. I think the general public understands “moral relativism” to exclude (b)
I think that’s uncharitable to the public: surely everyone should admit that people can be mistaken, on occasion, about what they themselves think. A view that holds that nothing that comes out of a person’s mouth can ever be wrong is scarcely worth discussing.
Eliezer doesn’t define morality in terms of humans; he defines it (as I understand) in terms of an objective computation that happens to be instantiated by humans.
The fact that this computation just so happens to be instantiated by humans and nothing else in the known universe cannot be a coincidence; surely there’s a causal relation between humans’ instantiating the computation and Eliezer’s referring to it.
surely there’s a causal relation between humans’ instantiating the computation and Eliezer’s referring to it.
Of course there’s a causal relation which explains the causal fact of this reference, but this causal explanation is not the same as the moral justification, and it’s not appealed to as the moral justification. We shouldn’t save babies because-morally it’s the human thing to do but because-morally it’s the right thing to do. What physically causes us to save the babies is a combination of the logical fact that saving babies is the right thing to do, and the physical fact that we are compelled by those sorts of logical facts. What makes saving the baby the right thing to do is a logical fact about the subject matter of rightness—in this case, a pretty fast and primitive implication from the premises that are baked into that subject matter and which distinguish it from the subject matter of wrongness. The physical fact that humans are compelled by these sorts of logical facts is not one of the facts which makes saving the baby the right thing to do. If I did assert that this physical fact was involved, I would be a moral relativist and I would say the sorts of other things that moral relativists say, like “If we wanted to eat babies, then that would be the right thing to do.”
The physical fact that humans are compelled by these sorts of logical facts is not one of the facts which makes saving the baby the right thing to do. If I did assert that this physical fact was involved, I would be a moral relativist and I would say the sorts of other things that moral relativists say, like “If we wanted to eat babies, then that would be the right thing to do.”
The moral relativist who says that doesn’t really disagree with you. The moral relativist considers a different property of algorithms to be the one that determines whether an algorithm is a morality, but this is largely a matter of definition.
For the relativist, an algorithm is a morality when it is a logic that compels an agent (in the limit of reflection, etc.). For you, an algorithm is a morality when it is the logic that in fact compels human agents (in the limit of reflection, etc.). That is why your view is a kind of relativism. You just say “morality” where other relativists would say “the morality that humans in fact have”.
You also seem more optimistic than most relativists that all non-mutant humans implement very nearly the same compulsive logic. But other relativists admit that this is a real possibility, and they wouldn’t take it to mean that they were wrong to be relativists.
If there is an advantage to the relativists’ use of “morality”, it is that their use doesn’t prejudge the question of whether all humans implement the same compulsive logic.
Right, so a moral relativist is a kind of moral absolutist who believes that the One True Moral Rule is that you must do what is the collective moral will of the species you’re part of.
I agree that it seems as though I just don’t understand. Sometimes, I feel perched on the edge of understanding, feel a little dizzy, and decide I don’t understand.
I don’t claim to be representative in any way, but my stumbling block seems to be this idea about how saving babies is right. Since I don’t feel strongly that saving babies is “right”, whenever you write, “saving babies is the right thing to do”, I translate this as, “X is the right thing to do” where X is something that is right, whatever that might mean. I leave that as a variable to see if it gets answered later.
Then you write, “What makes saving the baby the right thing to do is a logical fact about the subject matter of rightness—in this case, a pretty fast and primitive implication from the premises that are baked into that subject matter and which distinguish it from the subject matter of wrongness.”
How is wrongness or rightness baked into a subject matter?
Of course there’s a causal relation which explains the causal fact of this reference, but this causal explanation is not the same as the moral justification, and it’s not appealed to as the moral justification
Of course it isn’t, because we’re doing meta-ethics here, and don’t yet have access to the notion of “moral justification”; we’re in the process of deciding which kinds of things will be used as “moral justification”.
It’s your metamorality that is human-dependent, not your morality; see my other comment.
Now I’m confused. I don’t understand how you can have preferences that you use to decide what ought to count as a “moral justification” without already having a moral reference frame.
Since we don’t have conscious access to our premises, and we haven’t finished reflecting on them, we sometimes go around studying our own conclusions in an effort to discover what counts as a moral justification, but that’s not like a philosopher of pure emptiness constructing justificationness from scratch and appealing to some mysterious higher criterion. (Bearing in mind that when someone offers me a higher criterion, it usually ends up looking pretty uninteresting.)
I don’t understand how you can have preferences that you use to decide what ought to count as a “moral justification” without already having a moral reference frame.
Well, consider an analogy from mathematical logic: when you write out a formal proof that 2+2 = 4, at some point in the process, you’ll end up concatenating two symbols here and two symbols there to produce four symbols; but this doesn’t mean you’re appealing to the conclusion you’re trying to prove in your proof; it just so happens that your ability to produce the proof depends on the truth of the proposition.
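For concreteness, here is a minimal Lean 4 sketch of that proof (an illustration I am adding, assuming Lean's built-in natural numbers; it is not quoted from the discussion):

-- Addition on naturals is defined by the equations
--   a + 0 = a   and   a + succ b = succ (a + b).
-- Unfolding those equations turns `2 + 2`, i.e. succ (succ 0) + succ (succ 0),
-- into succ (succ (succ (succ 0))), i.e. `4`, by shuffling successor symbols.
-- The conclusion is never cited as a premise; `rfl` just checks that both
-- sides reduce to the same string of symbols.
example : 2 + 2 = 4 := rfl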
Similarly, when an AI with Morality programmed into it computes the correct action, it just follows the Morality algorithm directly, which doesn’t necessarily refer explicitly to “humans” as such. But human programmers had to program the Morality algorithm into the AI in the first place; and the reason they did so is because they themselves were running something related to the Morality algorithm in their own brains. That, as you know, doesn’t imply that the AI itself is appealing to “human values” in its actual computation (the Morality program need not make such a reference); but it does imply that the meta-ethical theory used by the programmers compelled them to (in an appropriate sense) look at their own brains to decide what to program into the AI.
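To make that distinction concrete, here is a minimal, purely illustrative Python sketch (the function names, criteria, and weights are all hypothetical and mine, not the content of any actual proposal):

def morality_score(action):
    # Object level: score an action by fixed criteria. Nothing in this
    # function refers to humans or to who happens to endorse the criteria.
    return 10.0 * action["babies_saved"] - 1.0 * action["suffering_caused"]

def choose_action(actions):
    # The AI just runs the Morality algorithm directly.
    return max(actions, key=morality_score)

# Meta level: how the criteria and weights above got chosen. That step was
# the programmers consulting (a model of) their own reflective judgments,
# which is a causal fact about how the function came to be written, not a
# premise the function itself appeals to.
candidates = [
    {"name": "save the baby", "babies_saved": 1, "suffering_caused": 0},
    {"name": "walk past",     "babies_saved": 0, "suffering_caused": 1},
]
print(choose_action(candidates)["name"])  # prints: save the baby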
I think that’s uncharitable to the public: surely everyone should admit that people can be mistaken, on occasion, about what they themselves think. A view that holds that nothing that comes out of a person’s mouth can ever be wrong is scarcely worth discussing.
This is far from uncontroversial in the general population.
The moral relativist who says that doesn’t really disagree with you. The moral relativist considers a different property of algorithms to be the one that determines whether an algorithm is a morality, but this is largely a matter of definition.
I agree with this comment and feel that it offers strong points against Eliezer’s way of talking about this issue.
Right, so a moral relativist is a kind of moral absolutist who believes that the One True Moral Rule is that you must do what is the collective moral will of the species you’re part of.
Yup, and so long as I’m going to be a moral absolutist anyway, why be that sort of moral absolutist?
Now I’m confused. I don’t understand how you can have preferences that you use to decide what ought to count as a “moral justification” without already having a moral reference frame.
That would be epistemic preferences. It’s epistemology (and allied fields, like logic and rationality) that really runs into circularity problems.