To say that Eliezer is a moral relativist because he realizes that a primality sorter might care about primality rather than morality, is equivalent to calling him a primality relativist because he realizes that a human might care about morality rather than primality.
But by Eliezer’s standards, it’s impossible for anyone to be a relativist about anything.
Consider what Einstein means when he says time and space are relative. He doesn’t mean you can just say whatever you want about them, he means that they’re relative to a certain reference frame. An observer on Earth may think it’s five years since a spaceship launched, and an observer on the spaceship may think it’s only been one, and each of them is correct relative to their reference frame.
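For what it's worth, the numbers in that example are consistent with the standard time-dilation relation. Here is a minimal sketch of the arithmetic, not from the original comment, ignoring the acceleration needed to actually reunite the two clocks:

```python
import math

def ship_speed_for_dilation(earth_years: float, ship_years: float) -> float:
    """Speed (as a fraction of c) at which earth_years elapse on Earth
    while ship_years elapse on the ship, from t_earth = gamma * t_ship."""
    gamma = earth_years / ship_years            # gamma = 5 in the example
    return math.sqrt(1.0 - 1.0 / gamma**2)      # v/c = sqrt(1 - 1/gamma^2)

print(ship_speed_for_dilation(5, 1))            # ~0.98: the ship moves at about 0.98c
```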
We could define “time” to mean “time as it passes on Earth, where the majority of humans live.” Then an observer on Earth is objectively correct to believe that five years have passed since the launch. An observer on the spaceship who said “One year has passed” would be wrong; he’d really mean “One s-year has passed.” Then we could say time and space weren’t really relative at all, and people on the ground and on the spaceship were just comparing time to s-time. The real answer to “How much time has passed” would be “Five years.”
Does that mean time isn’t really relative? Or does it just mean there’s a way to describe it that doesn’t use the word “relative”?
Or to give a more clearly wrong-headed example: English is objectively the easiest language in the world, if we accept that because the word “easy” is an English word it should refer to ease as English-speakers see it. When Kyousuke says Japanese is easier for him, he really means it’s mo wakariyasui translated as “j-easy”, which is completely different. By this way of talking, the standard belief that different languages are easier, relative to which one you grew up speaking, is false. English is just plain the easiest language.
Again, it’s just avoiding the word “relative” by talking in a confusing and unnatural way. And I don’t see the difference between talking about “easy” vs. “j-easy” and talking about “right” vs. “p-right”.
The reason people think that Eliezer is really a relativist is that they see concepts like “good” and “right” as reducing down to “the thing that I [the speaker, whoever it is] value.” Eliezer is arguing that that is not what they reduce down to. He argues that “good” and “right” reduce down to something like “concepts related to enhancing the wellbeing of conscious eudaemonic life forms.” It’s not a trick of the language: Eliezer is arguing that “right” refers to [wellbeing related concept] and p-right refers to [primality sorting related concept]. The words “good” and “right” might be relative, but the referent [wellbeing of conscious eudaemonic life forms] is not. The reason Eliezer focuses on fairness is that the concept of fairness is less nebulous than the concept of “right”, so it is easier to see that it is not arbitrary.
Pebble sorters and humans can both objectively agree on what it means to enhance the wellbeing of conscious eudaemonic life forms. Where they differ is whether they care about doing it. Pebble sorters don’t care about the wellbeing of others. Why would they, unless it happened to help them sort pebbles?
Similarly, humans and pebble sorters can both agree on which pebble heaps are prime-numbered. Where they differ is whether they care about sorting pebbles. Humans don’t care about pebble-sorting. Why would they, unless it helped them enhance the wellbeing of themselves and others?
So if you define morality as “the thing that I care about,” then I suppose it is relative, although I think that is not a proper use of the word “morality.” But if you define it as “enhancing the wellbeing of eudaemonic life forms” then it is quite objective.
Now, there might be room for moral disagreement in that people care about different aspects of wellbeing more. But that would be grounds for moral pluralism, not moral relativism. Regardless of what specific aspects of morality people focus on, certain things, like torturing the human population for all eternity, would be immoral [wellbeing non-enhancing] no matter what.
So what is the difference between “easy” vs. “j-easy” and “right” vs. “p-right”? Well, “easy” and “j-easy” both refer to the concept “can be done with little effort expended, even by someone who is completely new and unpracticed in it.” English is not “easy,” because only those practiced in it can speak it with little effort expended. Ditto for Japanese. The concept is the same in both languages. “Right,” by contrast, refers to “enhances the wellbeing of eudaemonic creatures,” while “p-right” refers to “sorting pebbles into prime-numbered heaps.” They are two completely different concepts, and that fact has nothing to do with the language being used.
Or to give a more clearly wrong-headed example: English is objectively the easiest language in the world, if we accept that because the word “easy” is an English word it should refer to ease as English-speakers see it. When Kyousuke says Japanese is easier for him, he really means it’s mo wakariyasui translated as “j-easy”, which is completely different. By this way of talking, the standard belief that different languages are easier, relative to which one you grew up speaking, is false. English is just plain the easiest language.
Until I read that, I thought I understood (and agreed with) Eliezer’s point, but that got me thinking. Now, I guess Eliezer would agree that it’s easy for Japanese people to speak Japanese, while he wouldn’t agree that it’s right for Baby-Eaters to keep on eating their children. So there must be something subtler I’m missing.
FWIW, my understanding of the original claim was precisely that morality is special in this way: that it means something to describe what humans value as “right” compared to what nonhumans value (and what nobody values), whereas it doesn’t mean anything analogous to describe the languages humans speak as “easily speakable” compared to the languages nonhumans speak (and the languages nobody speaks). And whatever that something is, eating babies simply doesn’t possess it, even for Baby-Eaters.
Personally I’ve never understood what that something might be, though, nor seen any evidence that it exists.
Have you forgotten that what it means to describe something by a word is given precisely by the sense of that word that the speaker has in mind? That you can call eudaimonia “right”, and heaps of prime pebbles “prime” is a fact about the words “right” and “prime” as used by humans, not about eudaimonia and pebbles themselves (except insofar as eudaimonia and prime-pebbled heaps by their nature satisfy the relevant definitions of “right” and “prime”, of course). Is English the easiest language, if you define “easiest” as “easiest for an English-speaker to speak”? How many legs does a dog have, if you call a tail a leg?
When I assert “eudaimonia is right” (supposing I believed that), there exist two structures in my brain, S1 and S2, such that S1 is tagged with the lexical entry “right” and S2 is tagged with the lexical entry “eudaimonia”, and S1 and S2 are related such that if my brain treats some thing X as an instance of S2, it also treats X as having the property S1.
Well, for a certain use of “is,” anyway.

I was going to ask how that relation came about, and how it behaves when your brain is computing counterfactuals… but even though those are good questions to consider, I realised that wouldn’t really be that helpful. So...
What I’m really trying to say is that there’s nothing special about morality at all. There doesn’t have to be anything special about it for eudaimonia to be right and for pebble-sorting to not. It’s just a concept, like every other concept. One that includes eudaimonia and excludes murder, and is mostly indifferent to pebble-sorting. Same as the concept prime includes {2, 3, 5, 7, …} and excludes {4, 6, 8, 9, …}.
The only thing remotely “special” about it is that it happens to be a human terminal value—which is the only reason we care enough to talk about it in the first place. The only thing remotely special about the word “right” is that it happens to mean, in English, this morality-concept (which happens to be a human terminal value).
So, to say that “eudaimonia is right” is simply to assert that eudaimonia is included in this set of things that includes eudaimonia and excludes murder (in other words, yes, “X ∈ S2 implies X ∈ S1”, where S2 is eudaimonia and S1 is morality). To say that what babyeaters value is right would be to assert that eating babies is included in this set (“X ∈ babyeating implies X ∈ S1”, which is clearly wrong, since murder ∈ babyeating).
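Here is a minimal sketch of that membership claim in code; the particular set contents are toy placeholders, not anything asserted in the posts above:

```python
# Concepts modeled as plain sets of (toy) features of the world.
morality = {"eudaimonia", "fairness"}            # S1: includes eudaimonia, excludes murder
eudaimonia = {"eudaimonia"}                      # S2
babyeating = {"eating babies", "murder"}         # what the Babyeaters value

def implies(a: set, b: set) -> bool:
    """'X in A implies X in B' for every X, i.e. A is a subset of B."""
    return a <= b

print(implies(eudaimonia, morality))   # True: "eudaimonia is right"
print(implies(babyeating, morality))   # False: murder is in babyeating but not in morality
```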
So, let us assume there exists some structure S3 in my head that implements my terminal values.
Maybe that’s eudaimonia, maybe that’s Hungarian goulash, I don’t really know what it is, and am not convinced that it’s anything internally coherent (that is, I’m perfectly prepared to believe that my S3 includes mutually exclusive states of the world).
I agree that when I label S3 “morality” I’m doing just what I do when I label S2 “eudaimonia” or label some other structure “prime”. There’s nothing special about the label “morality” in this sense. And if it turns out that you and I have close-enough S3s and we both label our S3s “morality,” then we mean the same thing by “morality.” Awesome.
If, OTOH, I have S3 implementing my terminal values and you have some different structure, S4, which you also label “morality”, then we might mean different things by “morality”.
Some day I might come to understand S3 and S4 well enough that I have a clear sense of the difference between them. At that point I have a lexical choice.
I can keep associating S3 with the label “morality” or “right” and apply some other label to S4 (e.g., “pseudo-morality” or “nsheppard’s right” or whatever). You might do the same thing. In that case, if (as you say) the only thing that might be remotely special about the label “morality” or “right” is that it might happen to refer to human terminal value, then it follows that there’s nothing special about that label here, since it no longer refers to a common terminal value. It’s just another word.
Conversely, I can choose to associate the label “morality” or “right” with some new S5, a synthesis of S3 and S4… perhaps their intersection, perhaps something else. You might do the same thing. At that point we agree that “morality” means S5, even though S5 does not implement either of our terminal values.
I generally agree with everything you say here, except that I’d like to clarify what you mean by “special” when you say that morality need not be special, as I’m not sure it would be clear to everyone reading your post. Obviously morality has no mystical properties or anything. It isn’t special in that sense, which is what I think you mean.
But morality does differ (in a totally nonmystical way) from many other terminal values in being what Eliezer calls “subjectively objective and subjunctively objective.” That is, there is only one way, or at least an extremely limited number of ways, to do morality correctly. Morality is not like taste, it isn’t different for every person.
You obviously already know this, but I think that it’s important to make that point clear because this subject has huge inferential distances. Hooray for motivational externalism!
Yeah, it’s precisely the assumption that the computation we refer to by “morality” is identical for every human that makes this whole approach feel inadequate to me. It’s just not clear to me that this is true, and if it turns out not to be true, then we’re faced with the problem of reconciling multiple equally valid moralities.
Of course, one approach is to stop caring about humans in general, and only care about that subset of humanity that agrees with me.
As I said before:

Now, there might be room for moral disagreement in that people care about different aspects of wellbeing more. But that would be grounds for moral pluralism, not moral relativism. Regardless of what specific aspects of morality people focus on, certain things, like torturing the human population for all eternity, would be immoral [wellbeing non-enhancing] no matter what.
If morality refers to a large computation related to the wellbeing of eudaemonic creatures, it might be possible that some people value different aspects of wellbeing more than others (e.g. some people might care more about freedom, others more about harm). But there’d still be a huge amount of agreement.
I think a good analogy is with the concept of “health.” It’s possible for people to care about different aspects of health more. Some people might care more about nutrition, others about exercise. But there are very few ways to be healthy correctly, and near infinite ways to be unhealthy. And even if someone thinks you have your priorities wrong when trying to be healthy, they can still agree that your efforts are making you healthier than no effort at all.
Of course, one approach is to stop caring about humans in general, and only care about that subset of humanity that agrees with me.
I care about the wellbeing of animals to some extent, even though most of them don’t care about morality at all. I also care, to a limited extent, about the wellbeing of sociopathic humans even though they don’t care about morality at all. I admit that I don’t care about them as much as I do about moral beings, but I do care.
If other moral humans have slightly different moral priorities from you, I think they’d still be worth caring about a great deal. Especially if you care at all about animals or sociopaths, who are certainly far less worthy of consideration than people who merely disagree with you about some aspect of morality.
I agree that we should expect significant (though not complete) overlap within the set of moral judgments made by all humans.
I would expect even more overlap among those made by non-pathological humans, and even more overlap among those made by non-pathological humans who share a cultural heritage.
I would expect less overlap (though not zero) among the set of moral judgments made by non-humans.
I agree that if statement X (e.g. “murder is wrong”) is endorsed by all the moral judgments in a particular set, then the agents making those judgments will all agree that X is right, although perhaps to different degrees depending on peripheral particulars. Similarly, if statement Y is not endorsed by all the moral judgments in a particular set, then the agents making those judgments will not all agree that Y is right.
It’s clear in the first case that right action is to abide by the implications of X. In the second case, it’s less clear what right action is.
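A small sketch of that overlap point; the judgment sets below are invented purely for illustration:

```python
# Each agent's set of endorsed moral statements (toy contents).
alice = {"murder is wrong", "charity is good", "lying is wrong"}
bob = {"murder is wrong", "charity is good", "eating meat is wrong"}

shared = alice & bob       # statements endorsed by every agent in the set
contested = alice ^ bob    # statements endorsed by some agents but not others

print("murder is wrong" in shared)   # True: all agree, so right action is clear here
print(sorted(contested))             # the cases where it is less clear what right action is
```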
I would expect even more overlap among those made by non-pathological humans, and even more overlap among those made by non-pathological humans who share a cultural heritage.
I would expect less overlap (though not zero) among the set of moral judgments made by non-humans.
I think the point I am trying to get across, and one of the major points made by Eliezer in this sequence, is that some of the things you are referring to as moral judgements aren’t really moral judgements. Eliezer is basically saying that when you make a moral judgement you are making computations about various aspects of the wellbeing of eudaemonic creatures. A judgement that refers to the huge and complex concept “the wellbeing of eudaemonic creatures” is a moral judgement. A judgement that refers to some other concept is not a moral judgement, even if we use the same word to describe each.
When a sociopath says “It is good for me to kill people” he is not making a moral judgement. That is, he is not making computations related to the wellbeing of people. Quite the contrary, he is completely ignoring the wellbeing of everyone but himself. Calling what he does a moral judgement obscures the issue.
Similarly, when the pebblesorter says “It is good for pebbles to be sorted into prime numbered heaps” it is not making a moral judgement. It isn’t doing computations about the wellbeing of people, it’s doing computations about the numbers of pebbles.
You, the sociopath, and the pebblesorter are not referring to the same concepts. You are referring to the wellbeing of people, the sociopath is referring to the gratification of his impulse, the pebblesorter is referring to the primality of pebble heaps. The phrase “moral judgement” should probably not be used to refer to all these different types of judgements, as they are not judgements about the same concepts at all.
I would submit that if you removed the word “moral” and asked a pebblesorter “What action would best enhance the wellbeing of eudaemonic creatures?” you and the pebblesorter would agree about quite a lot. The pebblesorter would then go back to sorting pebbles because it doesn’t care about the wellbeing of eudaemonic creatures. (Obviously this thought experiment would not work for a sociopath, because sociopaths evolved to impersonate moral people, so they would never give an honest answer.)
I think most moral disagreement among creatures who care about the wellbeing of others is a case of the blind men and the elephant. People disagree because wellbeing is a complex concept and it is possible to focus on one aspect of it at the expense of others (see scope insensitivity). Another source is self-deception: people want to do immoral things but still think of themselves as moral, so they fool themselves. A final source is that some people may genuinely care about some aspects of wellbeing more than other people do, even if you remove scope insensitivity. It is only that last kind of disagreement that is irresolvable, and as I said before, it is a case for moral pluralism, not moral relativism.
For convenience, I am using the abbreviation “woec” for “wellbeing of eudaemonic creatures”.
I agree that if I asked a pebblesorter “What action would best enhance woec”, assuming we could work out a shared definition of “eudaemonic”, we would agree about quite a lot.
If a pebblesorter asked me “What action would maximize prime-numbered heaps?” we would also agree about quite a lot.
If we were to both answer the question “What action would optimize for my values?” our answers would be almost completely unrelated.
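One way to picture those last three points, as a hedged sketch rather than anything from the original posts: both parties share the same observer-independent computation, and differ only in the value function they plug it into.

```python
def is_prime(n: int) -> bool:
    """The shared computation: a human and a pebblesorter get the same answer."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

heap_sizes = [7, 8, 13]
print([is_prime(n) for n in heap_sizes])        # [True, False, True] for both of us

# ...but we feed the shared facts into different value functions.
def pebblesorter_value(world: dict) -> float:
    return sum(1 for n in world["heaps"] if is_prime(n))   # cares only about prime heaps

def human_value(world: dict) -> float:
    return world["wellbeing"]                               # cares only about (toy) woec

world = {"heaps": heap_sizes, "wellbeing": 42.0}
print(pebblesorter_value(world), human_value(world))        # 2 42.0 -- almost unrelated answers
```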
I am willing to stop using the phrase “moral judgments” in this discussion to refer to judgments about what best implements the judger’s values. This is entirely because disagreements about lexical usage are rarely productive when what we’re really interested in is the referents. That said, I also prefer in that case to avoid using the phrase “moral judgments” to refer to judgments about what best achieves woec, since I don’t actually use the phrase to mean that, which will get confusing. In fact, it’s perhaps best to avoid the phrase altogether.
I agree that a lot of the disagreement about what action would best enhance woec, among creatures who value woec, is a blind-men-and-the-elephant problem.
I agree that humans often want to do things that would not best enhance woec, even when we are aware that the thing we want to do would not best enhance woec.
I agree that even among creatures who care about woec, there may not be agreement about values.
I agree that when creatures whose values matter to me don’t share values, I do well to embrace value pluralism.
That said, I also prefer in that case to avoid using the phrase “moral judgments” to refer to judgments about what best achieves woec, since I don’t actually use the phrase to mean that, which will get confusing. In fact, it’s perhaps best to avoid the phrase altogether.
If you really think that the phrase “moral judgements” is useless and ambiguous and that we shouldn’t use it, I can respect that. But if enhancing woec isn’t what we should use the word “morality” to describe, then what is? You also seem to nominate “optimizing for my values” as an alternative referent, but that doesn’t seem right to me. Sociopaths are generally regarded as gravely immoral, even if they efficiently implement their values, because they don’t care about the wellbeing of others. Should we really just jettison the word “morality” altogether?
I suppose that could work. Since I’ve read Eliezer’s work I’ve found that I can make the same points by substituting naturalistic statements for ones that use the word “moral.” For instance, “The world would be a happier place if X didn’t exist” is technically a naturalistic statement containing no value judgements; I use that a lot. But it seems like a shame to stop using such a powerful and effective word.
It’s not that I think the phrase is useless; it has many uses.
It’s that I think we use it to mean such different things that this conversation is not well-served by introducing it. (You use it to refer to judgments related to woec; I use it to refer to judgments related to the judger’s values.)
Yes, I would say that sociopaths make moral judgments, although their moral judgments differ from mine. I realize you would not say this, not because we disagree about sociopaths, but because we disagree about whether what sociopaths make can properly be labelled “moral judgment”.
I don’t think the labeling question is terribly important or interesting. As you say, “moral” can usefully be cashed out in other terms.
You mean, the assumption that every human uses the word “morality” to refer to the same computation. Clearly, if I use “morality” to refer to X, and you also use the word “morality” to refer to X, then X and X are identical trivially. We refer to the same thing. Keep careful track of the distinction between quotation and referent.
Anyway, before I answer, consider this...
If other people use “morality” to refer to something else… then what? How could it matter how other people use words?
I agree that if you and I both use “morality” to refer to X, then we refer to the same thing.
If I use “morality” to refer to X1 and you use it to refer to X2, it doesn’t matter at all, unless we try to have a conversation about morality. Then it can get awfully confusing. Similar things are true if I use “rubber” to refer to a device for removing pencil marks from paper, and you use “rubber” to refer to a prophylactic device… it’s not a problem at all, unless I ask you to fetch a bunch of rubbers from the supply cabinet for an upcoming meeting.
But what does that mean for what you should do? Nothing, right? It doesn’t matter that someone else uses “morality” to refer to X2. If I call murder “right”, murder is still wrong. And you should still lock up murderers.
Talking about the truth-value of the assertion “murder is right” seems unjustified at this point, much like the truth-value of “rubbers help prevent pregnancy.” Is it true? Yes. Is it false? Yes. When a word means different things within a conversation, ambiguity is introduced to many sentences containing that word. It helps at that point to set aside the ambiguous label and introduce more precise ones. Which is why I introduced X1 and X2 in the first place.
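A minimal sketch of that ambiguity; X1 and X2 here are stand-in sets, not anyone's actual values:

```python
# Two speakers attach the same word to different referents.
X1 = {"eudaimonia"}                        # my referent for "right": excludes murder
X2 = {"eudaimonia", "murder"}              # your (hypothetical) referent for "right"

lexicons = {"me": {"right": X1}, "you": {"right": X2}}

def evaluate(speaker: str, subject: str) -> bool:
    """Truth of '<subject> is right' as uttered by a given speaker."""
    return subject in lexicons[speaker]["right"]

print(evaluate("me", "murder"))    # False: "murder is right" is false in my mouth...
print(evaluate("you", "murder"))   # True: ...and true in yours, so the bare sentence is ambiguous
```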
I agree that the fact that X1 rejects murder doesn’t necessarily change just because X2 endorses it.
But I don’t agree that what X1 endorses is necessarily independent of what X2 endorses.
For example, if I don’t value the existence of Gorgonzola in the world, and I value your preferences being satisfied, then I value Gorgonzola IFF you prefer there exist Gorgonzola in the world.
To the extent that what I should do is a function of what I value, and to the extent that X2 relates to your preferences, then X2 (what you call “right”) has a lot to do with what I should do.
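A toy version of that dependence; the weights and preferences below are made up for illustration only:

```python
def my_value(world: dict, your_prefs: set) -> float:
    """I place no intrinsic value on Gorgonzola, but I value your preferences
    being satisfied, so what you call 'right' (X2) feeds into what I should do."""
    intrinsic = 1.0 if world["eudaimonia"] else 0.0
    yours_satisfied = sum(1.0 for p in your_prefs if world.get(p, False))
    return intrinsic + 0.5 * yours_satisfied    # 0.5: how heavily I weight your preferences

with_cheese = {"eudaimonia": True, "gorgonzola": True}
without_cheese = {"eudaimonia": True, "gorgonzola": False}

print(my_value(with_cheese, {"gorgonzola"}), my_value(without_cheese, {"gorgonzola"}))  # 1.5 1.0
print(my_value(with_cheese, set()), my_value(without_cheese, set()))                    # 1.0 1.0
```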
The assertion “murder is right”—by your definition of “right”, which is the only definition you should care about, being the person who formulates the question “what is right for me to do?”—has a value of TRUE precisely if X1 endorses murder. There’s nothing unjustified about saying that, since X1 was introduced specifically as the thing your definition of “right” refers to.
I’ll grant that it’s perfectly possible that X1 might have a term in it (to borrow terminology from the utility function world) for other peoples’ terminal values. But if so that’s a question of object-level ethics, not meta-ethics.
your definition of “right”, which is the only definition you should care about...
It is not clear to me that X1 is the only definition of “right” I should care about, even if it is mine… any more than thing-to-erase-pencil-marks-with is the only definition of “rubber” I should care about.
Regardless, whether I should care about other people’s definitions of these words or not, the fact remains that I do seem to care about it.
And I also seem to care about other people’s preferences being satisfied, especially the preferences that they associate with the emotional responses that lead them to talk about that preference being “right” (rather than just “my preference”).
Again, maybe I oughtn’t… though if so, it’s not clear to me why… but nevertheless I do.
...being the person who formulates the question “what is right for me to do?”
It may be relevant that this is not the only moral question I formulate. Other moral questions include “what is right for others to do?” and “what is right to occur?” Indeed, that last one is far more important to me than the others, which is one reason I consider myself mostly a consequentialist.
I’ll grant that it’s perfectly possible that X1 might have a term in it (to borrow terminology from the utility function world) for other peoples’ terminal values. But if so that’s a question of object-level ethics, not meta-ethics.
Any question you could possibly want the answer to relating in any sense to “rightness” is not a question at all unless you have a definition of “right” in mind (or at least a fuzzy intuitive definition that you don’t have full access to). You want to know “what is right to occur”. You won’t get anywhere unless you have an inkling of what you mean by “right”. It’s built into the question that you are looking for the answer to your question. It’s your question!
Maybe you decide that X1 (which is the meaning of your definition of “right”) includes, among things such as “eudaimonia” and “no murder”, “other humans getting what they value”. Then the answer to your question is that it’s right for people to experience eudaimonia and to not be murdered, and to get what they value. And the answer to “what should I do” is that you should try and bring those things about.
Or maybe I decide that X1 doesn’t include other humans getting what they value, and I’m only under the impression that it does because there are some things that other humans happen to value that X1 does include, or because X1 includes something that is similar but not quite identical to other humans getting what they value, or for some other reason.
Either way, whichever of those things turns out to be the case, that’s what I should do… agreed (1).
Of course, in some of those cases (though not others), in order to work out what that is in practice, I also need to know what other humans’ equivalents of X1 are. That is, if it turns out X1 includes you getting what you value as long as you’re alive, and what you value is given by X2, then as long as you’re alive I should bring about X2 as well as X1. And in this scenario, when you are no longer alive, I no longer should bring about X2.
====

(1) Or, well, colloquially true, anyway. I should certainly prefer those things occurring, but whether I should do anything in particular, let alone try to do anything in particular, is less clear. For example, if there exists a particularly perverse agent A who is much more powerful than I, and if A is such that A will bring about the things I value IFF I make no efforts whatsoever towards bringing them about myself, then it follows that what I ought to do is make no efforts whatsoever towards bringing them about. It’s not clear that I’m capable of that, but whether I’m capable of it or not, it seems clear that it’s what I ought to do. Put a different way, in that situation I should prefer to be capable of doing so, if it turns out that I’m not.