I think what confuses people is that he
1) claims that morality isn’t arbitrary and that we can make definitive statements about it,
2) also claims that there are no universally compelling arguments.
How does this differ from gustatory preferences?
1a) My preference for vanilla over chocolate ice cream is not arbitrary—I really do have that preference, and I can’t will myself to have a different one, and there are specific physical causes for my preference being what it is. To call the preference ‘arbitrary’ is like calling gravitation or pencils ‘arbitrary’, and carries no sting.
1b) My preference is physically instantiated, and we can make definitive statements about it, as about any other natural phenomenon.
2) There is no argument that could force any and all possible minds to like vanilla ice cream.
I raise the analogy because it seems an obvious one to me, so I don’t see where the confusion is. Eliezer views ethics the same way just about everyone intuitively views aesthetics—as a body of facts that can be empirically studied and are not purely a matter of personal opinion or ad-hoc stipulation—facts, though, that make ineliminable reference to the neurally encoded preferences of specific organisms, facts that are not written in the sky and do not possess a value causally independent of the minds in question.
It’s an entirely semantic confusion.
I don’t know what you mean by this. Obviously semantics matters for disentangling moral confusions. But the facts I outlined above about how ice cream preference works are not linguistic facts.
Good[1]: The human consensus on morality, the human CEV, the contents of a Friendly AI’s utility function, “sugar is sweet, love is good”. There is one correct definition of Good. “Pebblesorters do not care about good or evil, they care about grouping things into primes. Paperclippers do not care about good or evil, they care about paperclips”.
Good[2]: An individual’s morality, a special subset of an agent’s utility function (especially the subset that pertains to how everyone ought to act). “I feel sugar is yummy, but I don’t mind if you don’t agree. However, I feel love is good, and if you don’t agree we can’t be friends.”… “Pebblesorters think making prime-numbered pebble piles is good. Paperclippers think making paperclips is good.” (A pebblesorter might selfishly prefer to maximize the number of pebble piles that it makes itself, but the same pebblesorter believes everyone ought to act to maximize the total number of pebble piles, rather than selfishly maximizing their own. A perfectly good pebblesorter seeks only to maximize the total number of prime piles. Selfish pebblesorters hoard resources to maximize their own personal pebble creation. Evil pebblesorters knowingly make non-prime abominations.)
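The pebblesorter example can be made concrete with a toy sketch (all names and scoring here are invented for illustration, not anyone’s actual proposal): a utility function split into a “moral” component that scores how the whole world is arranged, regardless of who arranged it, and a “selfish” component that scores only the agent’s own output.

```python
# Toy illustration: a pebblesorter's utility function with a "moral"
# component and a "selfish" component. The moral component scores the
# *whole world*, no matter which agent built which pile.

def is_prime(n):
    """True if n is prime (the pebblesorters' criterion for a correct pile)."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def moral_score(world_piles):
    """Count every correct (prime) pile in the world, whoever made it."""
    return sum(1 for pile in world_piles if is_prime(pile))

def selfish_score(my_piles):
    """A selfish pebblesorter counts only its own prime piles."""
    return sum(1 for pile in my_piles if is_prime(pile))

# A world with piles built by two sorters:
alice_piles = [2, 3, 7]   # all prime: a "perfectly good" sorter
bob_piles = [4, 5, 9]     # 4 and 9 are non-prime "abominations"

world = alice_piles + bob_piles
print(moral_score(world))        # 4  (piles 2, 3, 7, 5)
print(selfish_score(bob_piles))  # 1  (only pile 5)
```

The point of the split is that a selfish pebblesorter and a perfectly good pebblesorter agree completely about which piles are correct; they differ only in whose piles they are trying to increase.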
so I don’t see where the confusion is.
Do you see what I mean by “semantic” confusion now? Eliezer (like most moral realists, universalists, etc) is using Good[1]. Those confused by his writing (who are accustomed to descriptive moral relativism, nihilism, etc) are using Good[2]. The maps are actually nearly identical in meaning, but because they are written in different languages it’s difficult to see that the maps are nearly identical.
I’m suggesting that Good[1] and Good[2] are sufficiently different that people who talk about morality often ought to have different words for them. This is one of those “If a tree falls in the forest, does it make a sound?” debates, which are utterly useless because they center entirely on the definition of sound.
Eliezer views ethics the same way just about everyone intuitively views aesthetics—as a body of facts that can be empirically studied and are not purely a matter of personal opinion or ad-hoc stipulation—facts, though, that make ineliminable reference to the neurally encoded preferences of specific organisms, facts that are not written in the sky and do not possess a value causally independent of the minds in question.
Yup, I agree completely; that’s exactly the correct way to think about it. The fact that you are able to give a definition of what ethics are while tabooing words like good and bad and moral is the reason that you can simultaneously uphold Good[2] with your gustatory analogy and still understand that Eliezer doesn’t disagree with you even though he uses Good[1].
Most people’s thinking is too attached to words to do that, so they get confused. Being able to think about what things are without referencing any semantic labels is a skill.
I raise the analogy because it seems an obvious one to me, so I don’t see where the confusion is.
Your analysis clearly describes some of my understanding of what EY says. I use “yummy” as a go-to analogy for morality as well. But EY also seems to be making a universalist argument, at least for “normal” humans. Because he talks about abstract computation, leaving particular brains behind, it’s just unclear to me whether he’s a subjectivist or a universalist.
The “no universally compelling argument” applies to Clippy versus us, but is there also no universally compelling argument with all of “us” as well?
“Universalist” and “Subjectivist” aren’t opposed or conflicting terms. “Subjective” simply says that moral statements are really statements about the attitudes or opinions of people (or something else with a mind). The opposing term is “objective”. “Universalist” and “relativist” are on a different dimension from subjective and objective. Universal vs. relative is about how variable or not variable morality is.
You could have a metaethical theory on which morality is both objective and relative. For example, you could define morality as what the law says, and it will be relative from country to country as laws differ. You could also have a subjective and universal meta-ethics: moral judgments could be statements about the attitudes of people, but all people could have the same attitudes.
I take Eliezer to hold something like the latter—moral judgments aren’t about people’s attitudes simpliciter: they’re about what those attitudes would be if people were perfectly rational and had perfect information (he’s hardly the first among philosophers here). It is possible that the outcome of that would be more or less universal among humans or even a larger group. Or at least some subset of attitudes might be universal. But I could be wrong about his view: I feel like I just end up reading my view into it whenever I try to describe his.
“Universalist” and “Subjectivist” aren’t opposed or conflicting terms. “Subjective” simply says that moral statements are really statements about the attitudes or opinions of people (or something else with a mind). The opposing term is “objective”. “Universalist” and “relativist” are on a different dimension from subjective and objective. Universal vs. relative is about how variable or not variable morality is.
If morality varies with individuals, as required by subjectivism, it is not at all universal, so the two are not orthogonal.
You could have a metaethical theory that morality is both objective and relative. For example, you could define morality as what the law says and it will be relative from country to country as laws differ.
If morality is relative to groups rather than individuals, it is still relative. Morality is objective when the truth values of moral statements don’t vary with individuals or groups, not when they vary with empirically discoverable facts.
But, EY also seems to be making a universalist argument, as least for “normal” humans.
If you have in mind ‘human universals’ when you say ‘universality’, that’s easily patched. Morality is like preferring ice cream in general, rather than like preferring vanilla ice cream. Just about every human likes ice cream.
Because he talks about abstract computation, leaving particular brains behind, it’s just unclear to me whether he’s a subjectivist or a universalist.
The brain is a computer, hence it runs ‘abstract computations’. This is true in essentially the same sense that all piles of five objects are instantiating the same abstract ‘fiveness’. If it’s mysterious in the case of human morality, it’s not only equally mysterious in the case of all recurrent physical processes; it’s equally mysterious in the case of all recurrent physical anythings.
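The ‘fiveness’ point can be cashed out with a toy example (an analogy only, not a model of brains): two procedures whose internal steps differ completely, yet which instantiate the same abstract computation.

```python
# Toy illustration of multiple realizability: two procedures with very
# different internal steps instantiate the same abstract computation,
# much as any pile of five objects instantiates the same 'fiveness'.

def sum_iterative(n):
    """Sum 1..n by explicit looping -- one 'physical' realization."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    """Sum 1..n by Gauss's formula -- a structurally different realization."""
    return n * (n + 1) // 2

# Different processes, identical input-output behavior: at the level of
# the abstract function computed, they are the same thing.
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(200))
```

Asking which of the two functions is “really” the sum is like asking which pile of five objects is “really” five; the abstraction is exactly what the distinct instantiations share.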
Some philosophers would say that brain computations are both subjective and objective—metaphysically subjective, because they involve our mental lives, but epistemically objective, because they can be discovered and verified empirically. For physicalists, however, ‘metaphysical subjectivity’ is not necessarily a joint-carving concept. And it may be possible for a non-sentient AI to calculate our moral algorithm. So there probably isn’t any interesting sense in which morality is subjective, except maybe the sense in which everything computed by an agent is ‘subjective’.
I don’t know anymore what you mean by ‘universalism’.
is there also no universally compelling argument with all of “us” as well?
There are universally compelling arguments for all adolescent or adult humans of sound mind. (And many pre-adolescent humans, and many humans of unsound mind.)
Subjectivism does not require that morality varies with individuals.
No, see the link above.
The link supports what I said. Subjectivism requires that moral claims have truth values which, in principle, depend on the individual making them. It doesn’t mean that any two people will necessarily have a different morality, but why would I assert that?
This is not true of all subjectivisms, as the link makes totally clear. Subjective simply means that something is mind-dependent; it need not be the mind of the person making the claim—or not only the mind of the person making the claim. For instance, the facts that determine whether or not a moral claim is true could consist in just the moral opinions and attitudes where all humans overlap.
There are people who use “subjective” to mean “mental”, but they shouldn’t.