I think what confuses people is that he
1) claims that morality isn’t arbitrary and that we can make definitive statements about it
2) also claims that there are no universally compelling arguments.
The confusion is resolved by realizing that he defines the words “moral” and “good” as roughly equivalent to human CEV.
So according to Eliezer, it’s not that humans think love, pleasure, and equality are Good and paperclippers think paperclips are Good. It’s that love, pleasure, and equality are part of the definition of good, while paperclips are just part of the definition of paperclippy. The Paperclipper doesn’t think paperclips are good...it simply doesn’t care about good, instead pursuing paperclippy.
Thus, moral relativism can be decried while “no universally compelling arguments” can be defended. Under this semantic structure, the Paperclipper will just say “okay, sure...killing is immoral, but I don’t really care as long as it’s paperclippy.”
Thus, arguments about morality among humans are analogous to Pebblesorter arguments about which piles are correct. In both cases, there is a correct answer.
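To make the structure concrete, here is a toy sketch (purely illustrative; the predicates and actions are invented) of how the Paperclipper can evaluate “good” correctly while never optimizing for it:

```python
# Toy model: "good" and "paperclippy" are just two different predicates.
# An agent is defined by which predicate it optimizes, not by any
# disagreement over what the other predicate outputs.

def good(action):          # fixed by human values (love, pleasure, equality...)
    return action in {"protect", "share"}

def paperclippy(action):   # fixed by paperclip-maximizer values
    return action in {"make_paperclips", "melt_humans_for_wire"}

def pursue(values, actions):
    return [a for a in actions if values(a)]

actions = ["protect", "share", "make_paperclips", "melt_humans_for_wire"]

print(pursue(good, actions))         # ['protect', 'share']
print(pursue(paperclippy, actions))  # ['make_paperclips', 'melt_humans_for_wire']
print(good("melt_humans_for_wire"))  # False -- and the Paperclipper can compute
                                     # this just as well; it simply doesn't care.
```

The Paperclipper never gets `good` wrong; it never consults it at all.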
It’s an entirely semantic confusion.
I suggest that ethicists ought to have different words for the various rigorized definitions of Good to avoid this sort of confusion. Since Eliezer-Good is roughly synonymous with CEV, maybe we can just call it CEV from now on?
Edit: At the very least, CEV is one rigorization of Eliezer-Good, even if it doesn’t articulate everything about it. There are multiple levels of rigor and naivety that may be involved here. Eliezer-Good is more rigorous than “good” but might not capture all the subtleties of the naive conception. CEV is more rigorous than Eliezer-Good, but it might not capture the full range of subtleties within Eliezer-Good (and it’s only one of multiple ways to rigorize Eliezer-Good...consider Coherent Aggregate Volition, for example, as an alternative rigorization of Eliezer-Good).
I think what confuses people is that he
1) claims that morality isn’t arbitrary and that we can make definitive statements about it
2) also claims that there are no universally compelling arguments.
How does this differ from gustatory preferences?
1a) My preference for vanilla over chocolate ice cream is not arbitrary—I really do have that preference, and I can’t will myself to have a different one, and there are specific physical causes for my preference being what it is. To call the preference ‘arbitrary’ is like calling gravitation or pencils ‘arbitrary’, and carries no sting.
1b) My preference is physically instantiated, and we can make definitive statements about it, as about any other natural phenomenon.
2) There is no argument that could force any and all possible minds to like vanilla ice cream.
I raise the analogy because it seems an obvious one to me, so I don’t see where the confusion is. Eliezer views ethics the same way just about everyone intuitively views aesthetics—as a body of facts that can be empirically studied and are not purely a matter of personal opinion or ad-hoc stipulation—facts, though, that make ineliminable reference to the neurally encoded preferences of specific organisms, facts that are not written in the sky and do not possess a value causally independent of the minds in question.
It’s an entirely semantic confusion.
I don’t know what you mean by this. Obviously semantics matters for disentangling moral confusions. But the facts I outlined above about how ice cream preference works are not linguistic facts.
Good[1]: The human consensus on morality, the human CEV, the contents of a Friendly AI’s utility function, “sugar is sweet, love is good”. There is one correct definition of Good. “Pebblesorters do not care about good or evil, they care about grouping things into primes. Paperclippers do not care about good or evil, they care about paperclips”.
Good[2]: An individual’s morality, a special subset of an agent’s utility function (especially the subset that pertains to how everyone ought to act). “I feel sugar is yummy, but I don’t mind if you don’t agree. However, I feel love is good, and if you don’t agree we can’t be friends.”… “Pebblesorters think making prime-numbered pebble piles is good. Paperclippers think making paperclips is good”. (A pebblesorter might selfishly prefer to maximize the number of pebble piles that they make themselves, but the same pebblesorter believes everyone ought to act to maximize the total number of pebble piles rather than selfishly maximizing their own. A perfectly good pebblesorter seeks only to maximize prime-numbered pebble piles. Selfish pebblesorters hoard resources to maximize their own personal pebble creation. Evil pebblesorters knowingly make non-prime abominations.)
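One way to see the difference (a sketch of this framing, not anything canonical): Good[1] is a one-place function fixed by human values, while Good[2] takes the judging agent as an extra argument.

```python
# Hypothetical stand-ins, for illustration only.
HUMAN_CEV = {"love", "pleasure", "equality"}
PEBBLESORTER_VALUES = {"prime_pebble_piles"}

def good1(action):
    """Good[1]: one fixed function, anchored to human values. Pebblesorters
    aren't wrong about good1; they just don't optimize it."""
    return action in HUMAN_CEV

def good2(judge_values, action):
    """Good[2]: a two-place function; what is "good" varies with the judge."""
    return action in judge_values

print(good1("prime_pebble_piles"))                       # False
print(good2(PEBBLESORTER_VALUES, "prime_pebble_piles"))  # True, for them
print(good2(HUMAN_CEV, "love"))                          # True, for us
```

Much of the apparent disagreement dissolves once you notice which arity of “good” a speaker is using.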
so I don’t see where the confusion is.
Do you see what I mean by “semantic” confusion now? Eliezer (like most moral realists, universalists, etc.) is using Good[1]. Those confused by his writing (who are accustomed to descriptive moral relativism, nihilism, etc.) are using Good[2]. The maps are actually nearly identical in meaning, but because they are written in different languages it’s difficult to see that.
I’m suggesting that Good[1] and Good[2] are sufficiently different that people who talk about morality ought to have different words for them. Otherwise this becomes one of those “If a tree falls in the forest, does it make a sound?” debates, which are utterly useless because they center entirely on the definition of “sound”.
Eliezer views ethics the same way just about everyone intuitively views aesthetics—as a body of facts that can be empirically studied and are not purely a matter of personal opinion or ad-hoc stipulation—facts, though, that make ineliminable reference to the neurally encoded preferences of specific organisms, facts that are not written in the sky and do not possess a value causally independent of the minds in question.
Yup, I agree completely, that’s exactly the correct way to think about it. The fact that you are able to give a definition of what ethics is while tabooing words like “good” and “bad” and “moral” is the reason that you can simultaneously uphold Good[2] with your gustatory analogy and still understand that Eliezer doesn’t disagree with you even though he uses Good[1].
Most people’s thinking is too attached to words to do that, so they get confused. Being able to think about what things are without referencing any semantic labels is a skill.
I raise the analogy because it seems an obvious one to me, so I don’t see where the confusion is.
Your analysis clearly describes some of my understanding of what EY says. I use “yummy” as a go-to analogy for morality as well. But EY also seems to be making a universalist argument, at least for “normal” humans. Because he talks about abstract computation, leaving particular brains behind, it’s just unclear to me whether he’s a subjectivist or a universalist.
The “no universally compelling argument” applies to Clippy versus us, but is there also no universally compelling argument with all of “us” as well?
“Universalist” and “Subjectivist” aren’t opposed or conflicting terms. “Subjective” simply says that moral statements are really statements about the attitudes or opinions of people (or something else with a mind). The opposing term is “objective”. “Universalist” and “relativist” are on a different dimension from subjective and objective. Universal vs. relative is about how variable or invariant morality is.
You could have a metaethical theory on which morality is both objective and relative. For example, you could define morality as what the law says, and it will be relative from country to country as laws differ. You could also have a subjective and universal metaethics: moral judgments could be statements about the attitudes of people, but all people could have the same attitudes.
I take Eliezer to hold something like the latter—moral judgments aren’t about people’s attitudes simpliciter: they’re about what those attitudes would be if people were perfectly rational and had perfect information (he’s hardly the first among philosophers here). It is possible that the outcome of that would be more or less universal among humans, or even a larger group. Or at least some subset of attitudes might be universal. But I could be wrong about his view: I feel like I just end up reading my view into it whenever I try to describe his.
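To make the cross-cutting dimensions concrete, here are all four cells (the objective-relative and subjective-universal examples are from the comment above; the other two are obvious completions, supplied here for illustration):

```python
# (source of truth, variability) -> example metaethical theory
cells = {
    ("objective",  "universal"): "moral facts built into the world, the same for everyone",
    ("objective",  "relative"):  "morality = what the local law says; varies by country",
    ("subjective", "universal"): "morality = human attitudes, which everyone happens to share",
    ("subjective", "relative"):  "morality = each individual's (or group's) own attitudes",
}
for (basis, scope), example in sorted(cells.items()):
    print(f"{basis:10} x {scope:9}: {example}")
```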
“Universalist” and “Subjectivist” aren’t opposed or conflicting terms. “Subjective” simply says that moral statements are really statements about the attitudes or opinions of people (or something else with a mind). The opposing term is “objective”. “Universalist” and “relativist” are on a different dimension from subjective and objective. Universal vs. relative is about how variable or invariant morality is.
If morality varies with individuals, as required by subjectivism, it is not at all universal, so the two are not orthogonal.
You could have a metaethical theory on which morality is both objective and relative. For example, you could define morality as what the law says, and it will be relative from country to country as laws differ.
If morality is relative to groups rather than individuals, it is still relative. Morality is objective when the truth values of moral statements don’t vary with individuals or groups, not merely when they vary with empirically discoverable facts.
Subjectivism does not require that morality varies with individuals.
No, see the link above.
You could also have a subjective and universal metaethics: moral judgments could be statements about the attitudes of people, but all people could have the same attitudes.
The link supports what I said. Subjectivism requires that moral claims have truth values which, in principle, depend on the individual making them. It doesn’t mean that any two people will necessarily have a different morality, but why would I assert that?
Subjectivism requires that moral claims have truth values which, in principle, depend on the individual making them
This is not true of all subjectivisms, as the link makes totally clear. Subjective simply means that something is mind-dependent; it need not be the mind of the person making the claim—or not only the mind of the person making the claim. For instance, the facts that determine whether or not a moral claim is true could consist in just the moral opinions and attitudes where all humans overlap.
There are people who use “subjective” to mean “mental”, but they shouldn’t.
But EY also seems to be making a universalist argument, at least for “normal” humans.
If you have in mind ‘human universals’ when you say ‘universality’, that’s easily patched. Morality is like preferring ice cream in general, rather than like preferring vanilla ice cream. Just about every human likes ice cream.
Because he talks about abstract computation, leaving particular brains behind, it’s just unclear to me whether he’s a subjectivist or a universalist.
The brain is a computer, hence it runs ‘abstract computations’. This is true in essentially the same sense that all piles of five objects are instantiating the same abstract ‘fiveness’. If it’s mysterious in the case of human morality, it’s not only equally mysterious in the case of all recurrent physical processes; it’s equally mysterious in the case of all recurrent physical anythings.
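The “same abstract computation” point can be made concrete with a trivial sketch (illustrative only): two physically different procedures instantiate one abstract function, in the same sense that different piles of five things instantiate fiveness.

```python
def add_by_arithmetic(a, b):
    return a + b

def add_by_counting(a, b):   # a physically different procedure...
    total = a
    for _ in range(b):
        total += 1
    return total

# ...instantiating the same abstract function on this domain.
assert all(add_by_arithmetic(a, b) == add_by_counting(a, b)
           for a in range(50) for b in range(50))
```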
Some philosophers would say that brain computations are both subjective and objective—metaphysically subjective, because they involve our mental lives, but epistemically objective, because they can be discovered and verified empirically. For physicalists, however, ‘metaphysical subjectivity’ is not necessarily a joint-carving concept. And it may be possible for a non-sentient AI to calculate our moral algorithm. So there probably isn’t any interesting sense in which morality is subjective, except maybe the sense in which everything computed by an agent is ‘subjective’.
I don’t know anymore what you mean by ‘universalism’.
is there also no universally compelling argument with all of “us” as well?
There are universally compelling arguments for all adolescent or adult humans of sound mind. (And many pre-adolescent humans, and many humans of unsound mind.)
Since Eliezer-Good is roughly synonymous with CEV, maybe we can just call it CEV from now on?
This leaves out the “rigid designator” bit that people are discussing up-thread. Your formulation invites the response, “So, if our CEV were different, then different things would be good?” Eliezer wants the answer to this to be “No.”
Perhaps we can say that “Eliezer-Good” is roughly synonymous with “our CEV as it actually is in this, the actual, world as this world is right now.”
Thus, if our CEV were different, we would be in a different possible world, and so our CEV in that world would not determine what is good. Even in that different, non-actual, possible world, what is good would be determined by what our actual CEV says is good in this, the actual, world.
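A loose programming analogy (a gloss, not Eliezer’s own formulation): the rigid reading binds “good” to our actual CEV once, like early binding, rather than re-evaluating CEV in whichever possible world is under discussion.

```python
def cev(world):
    # Hypothetical stand-in: what that world's humans would converge on.
    return world["extrapolated_values"]

actual_world   = {"extrapolated_values": {"love", "fairness"}}
counterfactual = {"extrapolated_values": {"obedience"}}

GOOD = cev(actual_world)   # rigid: bound once, in the actual world

def good_in(world):        # non-rigid: re-computed per world
    return cev(world)

print("obedience" in GOOD)                     # False, even when discussing
                                               # the counterfactual world
print("obedience" in good_in(counterfactual))  # True -- the reading Eliezer
                                               # wants to reject
```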
Both these statements are also true about physics, yet nobody seems to be confused about it in that case.
What do you mean? Rational agents ought to converge upon what physics is.
Only because that’s considered part of the definition of “rational agent”.
Yes? But the recipient of an “argument” is implicitly an agent who at least partially understands epistemology. There is not much point in talking about agents which aren’t rational, or at least partly-bounded-rational-ish. Completely insensible things are better modeled as objects, not agents, and you can’t argue with an object.
And can aliens have love and pleasure, or is Good a purely human concept?
By Eliezer’s usage? I’d say aliens might have love and pleasure in the same way that aliens might have legs...they just as easily might not. Think “wolf” vs. “snake”—one has legs and feels love while the other does not.
Let’s say they have love and pleasure. Then why would we want to define morality in a human-centric way?
1) claims that morality isn’t arbitrary and that we can make definitive statements about it
That isn’t non-relativism. Subjectivism is the claim that the truth of moral statements varies with the person making them. That is compatible with the claim that they are non-arbitrary, since they may be fixed by features of persons that they cannot change, and which can be objectively discovered. It isn’t a particularly strong version of subjectivism, though.
2) also claims that there are no universally compelling arguments.
That isn’t non-realism. Non-realism means that there are no arguments or evidence that would compel suitably equipped and motivated agents.
The confusion is resolved by realizing that he defines the words “moral” and “good” as roughly equivalent to human CEV.
The CEV of individual humans, or of humanity? You have been ambiguous about an important point that EY is also ambiguous about.
I’m ambiguous about it because I’m describing EY’s usage of the word, and he’s been ambiguous about it.
I typically adapt my usage to the person I’m talking to, but the way that I typically define “good” in my own head is: “the subset of my preferences which do not in any way reference myself as a person”...or, in other words, the behavior which I would prefer if I cared about everyone equally (if I were not selfish and didn’t prefer my in-group).
Under my usage, different people can have different conceptions of good. “Good” is a function of the agent making the judgement.
A pebblesorter might selfishly want to make every pebble pile themselves, but they also might think that increasing the total number of prime pebble piles in general is “good”. Then, according to the Pebblesorters, a “good” pebblesorter would put overall prime-pebble-pile maximization above their own personal prime-pebble-pile productivity. According to the Babyeaters, a “good” Babyeater would eat babies indiscriminately, even if they selfishly might want to spare their own. According to humans, Pebblesorter values are alien and Babyeater values are evil.
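A small sketch of that taxonomy (the scoring rules here are invented, just to pin down the three categories):

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def pebblesorter_verdict(piles, me):
    """piles: list of (size, builder) pairs; 'me' is the sorter being judged."""
    if any(not is_prime(size) for size, builder in piles if builder == me):
        return "evil"     # knowingly made a non-prime abomination
    if all(builder == me for size, builder in piles):
        return "selfish"  # hoards all the pile-making for itself
    return "good"         # cares only that prime piles get made, by anyone

print(pebblesorter_verdict([(7, "me"), (11, "you")], "me"))  # good
print(pebblesorter_verdict([(7, "me"), (13, "me")], "me"))   # selfish
print(pebblesorter_verdict([(8, "me")], "me"))               # evil
```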
I think you’re right here. He’s saying, in a way, that moral absolutism only makes sense within a context. Hence metaethics. It’s kinda hard to wrap one’s head around, but it does make sense.
The question of what EY means is entangled with the question of why he thinks it’s true.
This account of his meaning
It’s that love, pleasure, and equality are part of the definition of good
...is pretty incredible as an argument, because it appears to be an argument by definition...in fact, an argument by normative and novel definition...and he hates arguments by definition.
Well, even if they are not all bad, his argument-by-definition is not one of the good ones, because it’s not based on an accepted or common definition. Inasmuch as it’s both a novel theory and based on a definition, it’s based on a novel definition.